=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0213 18:25:08.036169 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/addons-444000/client.crt: no such file or directory
E0213 18:25:23.828904 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.834101 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.844251 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.864864 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.906050 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:23.987705 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:24.149195 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:24.470117 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:25.112148 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:26.438675 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:28.999437 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:34.119703 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:25:44.361907 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:26:04.843648 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:26:45.804198 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
E0213 18:28:07.766263 38899 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/functional-525000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m36.369195389s)
-- stdout --
* [ingress-addon-legacy-694000] minikube v1.32.0 on Darwin 14.3.1
- MINIKUBE_LOCATION=18165
- KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-694000 in cluster ingress-addon-legacy-694000
* Pulling base image v0.0.42-1704759386-17866 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0213 18:24:58.043992 42439 out.go:291] Setting OutFile to fd 1 ...
I0213 18:24:58.044185 42439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:58.044192 42439 out.go:304] Setting ErrFile to fd 2...
I0213 18:24:58.044196 42439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 18:24:58.044374 42439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18165-38421/.minikube/bin
I0213 18:24:58.045923 42439 out.go:298] Setting JSON to false
I0213 18:24:58.068747 42439 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":14357,"bootTime":1707863141,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0213 18:24:58.068860 42439 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0213 18:24:58.093137 42439 out.go:177] * [ingress-addon-legacy-694000] minikube v1.32.0 on Darwin 14.3.1
I0213 18:24:58.155260 42439 out.go:177] - MINIKUBE_LOCATION=18165
I0213 18:24:58.134091 42439 notify.go:220] Checking for updates...
I0213 18:24:58.213002 42439 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/18165-38421/kubeconfig
I0213 18:24:58.255252 42439 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0213 18:24:58.297312 42439 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0213 18:24:58.339913 42439 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18165-38421/.minikube
I0213 18:24:58.382058 42439 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0213 18:24:58.403517 42439 driver.go:392] Setting default libvirt URI to qemu:///system
I0213 18:24:58.461504 42439 docker.go:122] docker version: linux-25.0.3:Docker Desktop 4.27.2 (137060)
I0213 18:24:58.461638 42439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0213 18:24:58.573039 42439 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:24:58.559430847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0213 18:24:58.595459 42439 out.go:177] * Using the docker driver based on user configuration
I0213 18:24:58.617064 42439 start.go:298] selected driver: docker
I0213 18:24:58.617089 42439 start.go:902] validating driver "docker" against <nil>
I0213 18:24:58.617102 42439 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0213 18:24:58.622084 42439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0213 18:24:58.734536 42439 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:110 SystemTime:2024-02-14 02:24:58.724034769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:26 KernelVersion:6.6.12-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:6213296128 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1-desktop.4] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.24] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.21] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.0.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Err:failed to fetch metadata: fork/exec /Users/jenkins/.docker/cli-plugins/docker-scan: no such file or directory Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.4.1]] Warnings:<nil>}}
I0213 18:24:58.734728 42439 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I0213 18:24:58.734914 42439 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0213 18:24:58.756377 42439 out.go:177] * Using Docker Desktop driver with root privileges
I0213 18:24:58.778421 42439 cni.go:84] Creating CNI manager for ""
I0213 18:24:58.778457 42439 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0213 18:24:58.778473 42439 start_flags.go:321] config:
{Name:ingress-addon-legacy-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 18:24:58.800451 42439 out.go:177] * Starting control plane node ingress-addon-legacy-694000 in cluster ingress-addon-legacy-694000
I0213 18:24:58.843471 42439 cache.go:121] Beginning downloading kic base image for docker with docker
I0213 18:24:58.865332 42439 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
I0213 18:24:58.907480 42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 18:24:58.907537 42439 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
I0213 18:24:58.962689 42439 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
I0213 18:24:58.962716 42439 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
I0213 18:24:59.163747 42439 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0213 18:24:59.163795 42439 cache.go:56] Caching tarball of preloaded images
I0213 18:24:59.164255 42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 18:24:59.208573 42439 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0213 18:24:59.229947 42439 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0213 18:24:59.773989 42439 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0213 18:25:16.951876 42439 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0213 18:25:16.952075 42439 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0213 18:25:17.585007 42439 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0213 18:25:17.585248 42439 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/config.json ...
I0213 18:25:17.585275 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/config.json: {Name:mkb247d9310fe07a1dc14b022dbbd70c65616aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:17.585579 42439 cache.go:194] Successfully downloaded all kic artifacts
I0213 18:25:17.585609 42439 start.go:365] acquiring machines lock for ingress-addon-legacy-694000: {Name:mk376ced87b7a2e785d303268517d60c6f604567 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0213 18:25:17.586469 42439 start.go:369] acquired machines lock for "ingress-addon-legacy-694000" in 846.713µs
I0213 18:25:17.586515 42439 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0213 18:25:17.586580 42439 start.go:125] createHost starting for "" (driver="docker")
I0213 18:25:17.611821 42439 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0213 18:25:17.612167 42439 start.go:159] libmachine.API.Create for "ingress-addon-legacy-694000" (driver="docker")
I0213 18:25:17.612239 42439 client.go:168] LocalClient.Create starting
I0213 18:25:17.612443 42439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem
I0213 18:25:17.612538 42439 main.go:141] libmachine: Decoding PEM data...
I0213 18:25:17.612570 42439 main.go:141] libmachine: Parsing certificate...
I0213 18:25:17.612665 42439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem
I0213 18:25:17.612735 42439 main.go:141] libmachine: Decoding PEM data...
I0213 18:25:17.612751 42439 main.go:141] libmachine: Parsing certificate...
I0213 18:25:17.632119 42439 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-694000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0213 18:25:17.684338 42439 cli_runner.go:211] docker network inspect ingress-addon-legacy-694000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0213 18:25:17.684466 42439 network_create.go:281] running [docker network inspect ingress-addon-legacy-694000] to gather additional debugging logs...
I0213 18:25:17.684486 42439 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-694000
W0213 18:25:17.734945 42439 cli_runner.go:211] docker network inspect ingress-addon-legacy-694000 returned with exit code 1
I0213 18:25:17.734978 42439 network_create.go:284] error running [docker network inspect ingress-addon-legacy-694000]: docker network inspect ingress-addon-legacy-694000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-694000 not found
I0213 18:25:17.734994 42439 network_create.go:286] output of [docker network inspect ingress-addon-legacy-694000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-694000 not found
** /stderr **
I0213 18:25:17.735145 42439 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0213 18:25:17.786873 42439 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000197550}
I0213 18:25:17.786918 42439 network_create.go:124] attempt to create docker network ingress-addon-legacy-694000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0213 18:25:17.786997 42439 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 ingress-addon-legacy-694000
I0213 18:25:17.874681 42439 network_create.go:108] docker network ingress-addon-legacy-694000 192.168.49.0/24 created
I0213 18:25:17.874727 42439 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-694000" container
I0213 18:25:17.874841 42439 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0213 18:25:17.927015 42439 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-694000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --label created_by.minikube.sigs.k8s.io=true
I0213 18:25:17.978892 42439 oci.go:103] Successfully created a docker volume ingress-addon-legacy-694000
I0213 18:25:17.979014 42439 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-694000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --entrypoint /usr/bin/test -v ingress-addon-legacy-694000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
I0213 18:25:18.358204 42439 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-694000
I0213 18:25:18.358249 42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 18:25:18.358264 42439 kic.go:194] Starting extracting preloaded images to volume ...
I0213 18:25:18.358381 42439 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-694000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
I0213 18:25:20.597448 42439 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-694000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (2.239026125s)
I0213 18:25:20.597476 42439 kic.go:203] duration metric: took 2.239248 seconds to extract preloaded images to volume
I0213 18:25:20.597612 42439 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0213 18:25:20.708768 42439 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-694000 --name ingress-addon-legacy-694000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-694000 --network ingress-addon-legacy-694000 --ip 192.168.49.2 --volume ingress-addon-legacy-694000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
I0213 18:25:20.983883 42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Running}}
I0213 18:25:21.039626 42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
I0213 18:25:21.097836 42439 cli_runner.go:164] Run: docker exec ingress-addon-legacy-694000 stat /var/lib/dpkg/alternatives/iptables
I0213 18:25:21.261511 42439 oci.go:144] the created container "ingress-addon-legacy-694000" has a running status.
I0213 18:25:21.261561 42439 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa...
I0213 18:25:21.498865 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0213 18:25:21.498933 42439 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0213 18:25:21.562684 42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
I0213 18:25:21.615938 42439 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0213 18:25:21.615960 42439 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-694000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0213 18:25:21.710432 42439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-694000 --format={{.State.Status}}
I0213 18:25:21.763849 42439 machine.go:88] provisioning docker machine ...
I0213 18:25:21.763896 42439 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-694000"
I0213 18:25:21.764006 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:21.816900 42439 main.go:141] libmachine: Using SSH client type: native
I0213 18:25:21.817230 42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53836 <nil> <nil>}
I0213 18:25:21.817247 42439 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-694000 && echo "ingress-addon-legacy-694000" | sudo tee /etc/hostname
I0213 18:25:21.980755 42439 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-694000
I0213 18:25:21.980899 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:22.032916 42439 main.go:141] libmachine: Using SSH client type: native
I0213 18:25:22.033232 42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53836 <nil> <nil>}
I0213 18:25:22.033248 42439 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-694000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-694000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-694000' | sudo tee -a /etc/hosts;
fi
fi
I0213 18:25:22.175626 42439 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0213 18:25:22.175644 42439 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/18165-38421/.minikube CaCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18165-38421/.minikube}
I0213 18:25:22.175662 42439 ubuntu.go:177] setting up certificates
I0213 18:25:22.175668 42439 provision.go:83] configureAuth start
I0213 18:25:22.175741 42439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-694000
I0213 18:25:22.227921 42439 provision.go:138] copyHostCerts
I0213 18:25:22.227967 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
I0213 18:25:22.228018 42439 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem, removing ...
I0213 18:25:22.228025 42439 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem
I0213 18:25:22.228177 42439 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.pem (1078 bytes)
I0213 18:25:22.228365 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
I0213 18:25:22.228399 42439 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem, removing ...
I0213 18:25:22.228404 42439 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem
I0213 18:25:22.228509 42439 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/cert.pem (1123 bytes)
I0213 18:25:22.228645 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
I0213 18:25:22.228691 42439 exec_runner.go:144] found /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem, removing ...
I0213 18:25:22.228696 42439 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem
I0213 18:25:22.228785 42439 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18165-38421/.minikube/key.pem (1679 bytes)
I0213 18:25:22.228922 42439 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-694000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-694000]
I0213 18:25:22.306228 42439 provision.go:172] copyRemoteCerts
I0213 18:25:22.306284 42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0213 18:25:22.306342 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:22.358908 42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
I0213 18:25:22.463528 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0213 18:25:22.463612 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0213 18:25:22.503323 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem -> /etc/docker/server.pem
I0213 18:25:22.503397 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0213 18:25:22.543769 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0213 18:25:22.543908 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0213 18:25:22.584768 42439 provision.go:86] duration metric: configureAuth took 409.052462ms
I0213 18:25:22.584807 42439 ubuntu.go:193] setting minikube options for container-runtime
I0213 18:25:22.585056 42439 config.go:182] Loaded profile config "ingress-addon-legacy-694000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0213 18:25:22.585205 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:22.638654 42439 main.go:141] libmachine: Using SSH client type: native
I0213 18:25:22.638965 42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53836 <nil> <nil>}
I0213 18:25:22.638982 42439 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0213 18:25:22.776320 42439 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0213 18:25:22.776339 42439 ubuntu.go:71] root file system type: overlay
I0213 18:25:22.776425 42439 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0213 18:25:22.776506 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:22.828730 42439 main.go:141] libmachine: Using SSH client type: native
I0213 18:25:22.829034 42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53836 <nil> <nil>}
I0213 18:25:22.829088 42439 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0213 18:25:22.992527 42439 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0213 18:25:22.992632 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:23.044840 42439 main.go:141] libmachine: Using SSH client type: native
I0213 18:25:23.072529 42439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1407080] 0x1409d60 <nil> [] 0s} 127.0.0.1 53836 <nil> <nil>}
I0213 18:25:23.072560 42439 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0213 18:25:23.679644 42439 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-10-26 09:06:22.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-02-14 02:25:22.987355255 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0213 18:25:23.679686 42439 machine.go:91] provisioned docker machine in 1.915829512s
I0213 18:25:23.679712 42439 client.go:171] LocalClient.Create took 6.067542671s
I0213 18:25:23.679738 42439 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-694000" took 6.067671783s
I0213 18:25:23.679746 42439 start.go:300] post-start starting for "ingress-addon-legacy-694000" (driver="docker")
I0213 18:25:23.679754 42439 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0213 18:25:23.679866 42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0213 18:25:23.680033 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:23.733912 42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
I0213 18:25:23.837754 42439 ssh_runner.go:195] Run: cat /etc/os-release
I0213 18:25:23.843236 42439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0213 18:25:23.843290 42439 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0213 18:25:23.843299 42439 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0213 18:25:23.843304 42439 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0213 18:25:23.843329 42439 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/addons for local assets ...
I0213 18:25:23.843435 42439 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18165-38421/.minikube/files for local assets ...
I0213 18:25:23.843688 42439 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> 388992.pem in /etc/ssl/certs
I0213 18:25:23.843694 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> /etc/ssl/certs/388992.pem
I0213 18:25:23.843920 42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0213 18:25:23.858311 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /etc/ssl/certs/388992.pem (1708 bytes)
I0213 18:25:23.898179 42439 start.go:303] post-start completed in 218.395299ms
I0213 18:25:23.899090 42439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-694000
I0213 18:25:23.953064 42439 profile.go:148] Saving config to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/config.json ...
I0213 18:25:23.953539 42439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0213 18:25:23.953627 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:24.006354 42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
I0213 18:25:24.100319 42439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0213 18:25:24.105089 42439 start.go:128] duration metric: createHost completed in 6.518602481s
I0213 18:25:24.105108 42439 start.go:83] releasing machines lock for "ingress-addon-legacy-694000", held for 6.518725821s
I0213 18:25:24.105193 42439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-694000
I0213 18:25:24.159183 42439 ssh_runner.go:195] Run: cat /version.json
I0213 18:25:24.159194 42439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0213 18:25:24.159264 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:24.159276 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:24.217117 42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
I0213 18:25:24.217117 42439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53836 SSHKeyPath:/Users/jenkins/minikube-integration/18165-38421/.minikube/machines/ingress-addon-legacy-694000/id_rsa Username:docker}
I0213 18:25:24.412005 42439 ssh_runner.go:195] Run: systemctl --version
I0213 18:25:24.416555 42439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0213 18:25:24.421446 42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0213 18:25:24.463374 42439 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0213 18:25:24.463464 42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0213 18:25:24.492815 42439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0213 18:25:24.521300 42439 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0213 18:25:24.521350 42439 start.go:475] detecting cgroup driver to use...
I0213 18:25:24.521376 42439 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0213 18:25:24.521578 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0213 18:25:24.551366 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0213 18:25:24.567601 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0213 18:25:24.583450 42439 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0213 18:25:24.583512 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0213 18:25:24.599058 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0213 18:25:24.615116 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0213 18:25:24.631724 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0213 18:25:24.647791 42439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0213 18:25:24.663538 42439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0213 18:25:24.680281 42439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0213 18:25:24.695352 42439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0213 18:25:24.710464 42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 18:25:24.768715 42439 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0213 18:25:24.854515 42439 start.go:475] detecting cgroup driver to use...
I0213 18:25:24.854537 42439 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0213 18:25:24.854594 42439 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0213 18:25:24.874401 42439 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0213 18:25:24.874530 42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0213 18:25:24.893645 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0213 18:25:24.924266 42439 ssh_runner.go:195] Run: which cri-dockerd
I0213 18:25:24.928471 42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0213 18:25:24.943815 42439 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0213 18:25:24.974647 42439 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0213 18:25:25.062237 42439 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0213 18:25:25.121981 42439 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0213 18:25:25.122119 42439 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0213 18:25:25.151842 42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 18:25:25.215379 42439 ssh_runner.go:195] Run: sudo systemctl restart docker
I0213 18:25:25.461626 42439 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0213 18:25:25.486230 42439 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0213 18:25:25.554408 42439 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0213 18:25:25.554537 42439 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-694000 dig +short host.docker.internal
I0213 18:25:25.676969 42439 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0213 18:25:25.677060 42439 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0213 18:25:25.682434 42439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0213 18:25:25.700159 42439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-694000
I0213 18:25:25.752756 42439 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0213 18:25:25.752856 42439 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 18:25:25.773437 42439 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0213 18:25:25.773463 42439 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0213 18:25:25.773523 42439 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0213 18:25:25.788607 42439 ssh_runner.go:195] Run: which lz4
I0213 18:25:25.792741 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0213 18:25:25.792890 42439 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0213 18:25:25.797131 42439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0213 18:25:25.797153 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0213 18:25:32.586326 42439 docker.go:649] Took 6.793567 seconds to copy over tarball
I0213 18:25:32.586396 42439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0213 18:25:34.306659 42439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.720270673s)
I0213 18:25:34.306676 42439 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0213 18:25:34.360742 42439 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0213 18:25:34.376352 42439 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0213 18:25:34.404653 42439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0213 18:25:34.466278 42439 ssh_runner.go:195] Run: sudo systemctl restart docker
I0213 18:25:35.679557 42439 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.21327802s)
I0213 18:25:35.679775 42439 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0213 18:25:35.699078 42439 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0213 18:25:35.699094 42439 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0213 18:25:35.699105 42439 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0213 18:25:35.704813 42439 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0213 18:25:35.704840 42439 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0213 18:25:35.705484 42439 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0213 18:25:35.705493 42439 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0213 18:25:35.705513 42439 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0213 18:25:35.705556 42439 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0213 18:25:35.705563 42439 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0213 18:25:35.705586 42439 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0213 18:25:35.709811 42439 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0213 18:25:35.710288 42439 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0213 18:25:35.711584 42439 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0213 18:25:35.711736 42439 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0213 18:25:35.711610 42439 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0213 18:25:35.711780 42439 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0213 18:25:35.711817 42439 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0213 18:25:35.712000 42439 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0213 18:25:37.615884 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0213 18:25:37.635494 42439 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0213 18:25:37.635533 42439 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0213 18:25:37.635599 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0213 18:25:37.653555 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0213 18:25:37.671259 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0213 18:25:37.690990 42439 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0213 18:25:37.691013 42439 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0213 18:25:37.691073 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0213 18:25:37.708488 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0213 18:25:37.708773 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0213 18:25:37.727608 42439 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0213 18:25:37.727639 42439 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0213 18:25:37.727713 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0213 18:25:37.740470 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0213 18:25:37.746329 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0213 18:25:37.747705 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0213 18:25:37.752816 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0213 18:25:37.761508 42439 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0213 18:25:37.761542 42439 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0213 18:25:37.761633 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0213 18:25:37.767353 42439 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0213 18:25:37.767378 42439 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0213 18:25:37.767439 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0213 18:25:37.773118 42439 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0213 18:25:37.773148 42439 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0213 18:25:37.773212 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0213 18:25:37.784140 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0213 18:25:37.787773 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0213 18:25:37.788394 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0213 18:25:37.794296 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0213 18:25:37.805346 42439 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0213 18:25:37.805369 42439 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0213 18:25:37.805427 42439 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0213 18:25:37.823091 42439 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0213 18:25:38.092320 42439 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0213 18:25:38.112816 42439 cache_images.go:92] LoadImages completed in 2.413733817s
W0213 18:25:38.112885 42439 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18165-38421/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
I0213 18:25:38.112993 42439 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0213 18:25:38.160506 42439 cni.go:84] Creating CNI manager for ""
I0213 18:25:38.160523 42439 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0213 18:25:38.160537 42439 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0213 18:25:38.160552 42439 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-694000 NodeName:ingress-addon-legacy-694000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0213 18:25:38.160636 42439 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-694000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0213 18:25:38.160693 42439 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-694000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0213 18:25:38.160790 42439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0213 18:25:38.175932 42439 binaries.go:44] Found k8s binaries, skipping transfer
I0213 18:25:38.176010 42439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0213 18:25:38.190422 42439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0213 18:25:38.218867 42439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0213 18:25:38.247881 42439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0213 18:25:38.277729 42439 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0213 18:25:38.282313 42439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0213 18:25:38.300244 42439 certs.go:56] Setting up /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000 for IP: 192.168.49.2
I0213 18:25:38.300326 42439 certs.go:190] acquiring lock for shared ca certs: {Name:mkc5f1a81e3b2f96d4314e8cdee92a3e3396cb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.300530 42439 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key
I0213 18:25:38.300621 42439 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key
I0213 18:25:38.300674 42439 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.key
I0213 18:25:38.300693 42439 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.crt with IP's: []
I0213 18:25:38.502529 42439 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.crt ...
I0213 18:25:38.502546 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.crt: {Name:mk45a255dad4dc9ca0803c595f373b3ab70313f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.502910 42439 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.key ...
I0213 18:25:38.502919 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/client.key: {Name:mkb55906d001aff3c57de380613b5cc14210b0cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.503150 42439 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2
I0213 18:25:38.503164 42439 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0213 18:25:38.755632 42439 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2 ...
I0213 18:25:38.755646 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2: {Name:mk51d5f9200fbf20e40ef0f598e6720c414689d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.755951 42439 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2 ...
I0213 18:25:38.755964 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2: {Name:mk04c4c35039f4e6b18de33df6fe8e2cd297929d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.756171 42439 certs.go:337] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt
I0213 18:25:38.756359 42439 certs.go:341] copying /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key
I0213 18:25:38.756517 42439 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key
I0213 18:25:38.756532 42439 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt with IP's: []
I0213 18:25:38.925357 42439 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt ...
I0213 18:25:38.925368 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt: {Name:mkbb681791d725d411b674f575b79f85eada4c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.925627 42439 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key ...
I0213 18:25:38.925636 42439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key: {Name:mkf584986fd7249feec69304158aae04b02db8d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0213 18:25:38.925840 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0213 18:25:38.925873 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0213 18:25:38.925903 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0213 18:25:38.925920 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0213 18:25:38.925937 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0213 18:25:38.925955 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0213 18:25:38.925978 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0213 18:25:38.926006 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0213 18:25:38.926096 42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem (1338 bytes)
W0213 18:25:38.926145 42439 certs.go:433] ignoring /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899_empty.pem, impossibly tiny 0 bytes
I0213 18:25:38.926154 42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca-key.pem (1675 bytes)
I0213 18:25:38.926184 42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/ca.pem (1078 bytes)
I0213 18:25:38.926213 42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/cert.pem (1123 bytes)
I0213 18:25:38.926245 42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/certs/key.pem (1679 bytes)
I0213 18:25:38.926309 42439 certs.go:437] found cert: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem (1708 bytes)
I0213 18:25:38.926358 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0213 18:25:38.926379 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem -> /usr/share/ca-certificates/38899.pem
I0213 18:25:38.926397 42439 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem -> /usr/share/ca-certificates/388992.pem
I0213 18:25:38.926894 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0213 18:25:38.968725 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0213 18:25:39.009286 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0213 18:25:39.050795 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/profiles/ingress-addon-legacy-694000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0213 18:25:39.091770 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0213 18:25:39.132119 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0213 18:25:39.173413 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0213 18:25:39.215231 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0213 18:25:39.256013 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0213 18:25:39.296533 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/certs/38899.pem --> /usr/share/ca-certificates/38899.pem (1338 bytes)
I0213 18:25:39.337379 42439 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18165-38421/.minikube/files/etc/ssl/certs/388992.pem --> /usr/share/ca-certificates/388992.pem (1708 bytes)
I0213 18:25:39.379805 42439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0213 18:25:39.409029 42439 ssh_runner.go:195] Run: openssl version
I0213 18:25:39.415123 42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38899.pem && ln -fs /usr/share/ca-certificates/38899.pem /etc/ssl/certs/38899.pem"
I0213 18:25:39.431418 42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38899.pem
I0213 18:25:39.435812 42439 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 02:17 /usr/share/ca-certificates/38899.pem
I0213 18:25:39.435860 42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38899.pem
I0213 18:25:39.442399 42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38899.pem /etc/ssl/certs/51391683.0"
I0213 18:25:39.458099 42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/388992.pem && ln -fs /usr/share/ca-certificates/388992.pem /etc/ssl/certs/388992.pem"
I0213 18:25:39.474078 42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/388992.pem
I0213 18:25:39.478872 42439 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 02:17 /usr/share/ca-certificates/388992.pem
I0213 18:25:39.478930 42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/388992.pem
I0213 18:25:39.485473 42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/388992.pem /etc/ssl/certs/3ec20f2e.0"
I0213 18:25:39.501527 42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0213 18:25:39.517611 42439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0213 18:25:39.522169 42439 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:09 /usr/share/ca-certificates/minikubeCA.pem
I0213 18:25:39.522215 42439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0213 18:25:39.528689 42439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0213 18:25:39.544510 42439 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0213 18:25:39.548698 42439 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0213 18:25:39.548761 42439 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-694000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-694000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
I0213 18:25:39.548860 42439 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0213 18:25:39.566426 42439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0213 18:25:39.581499 42439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0213 18:25:39.596852 42439 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0213 18:25:39.596944 42439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 18:25:39.612465 42439 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 18:25:39.612500 42439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0213 18:25:39.679151 42439 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0213 18:25:39.679233 42439 kubeadm.go:322] [preflight] Running pre-flight checks
I0213 18:25:39.980055 42439 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0213 18:25:39.980218 42439 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0213 18:25:39.980382 42439 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0213 18:25:40.197775 42439 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0213 18:25:40.198416 42439 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0213 18:25:40.198453 42439 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0213 18:25:40.270262 42439 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0213 18:25:40.291857 42439 out.go:204] - Generating certificates and keys ...
I0213 18:25:40.291993 42439 kubeadm.go:322] [certs] Using existing ca certificate authority
I0213 18:25:40.292088 42439 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0213 18:25:40.448827 42439 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0213 18:25:40.628033 42439 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0213 18:25:40.869566 42439 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0213 18:25:40.940807 42439 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0213 18:25:41.032352 42439 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0213 18:25:41.032517 42439 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0213 18:25:41.260458 42439 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0213 18:25:41.260679 42439 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0213 18:25:41.333320 42439 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0213 18:25:41.529128 42439 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0213 18:25:41.597342 42439 kubeadm.go:322] [certs] Generating "sa" key and public key
I0213 18:25:41.597443 42439 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0213 18:25:41.792254 42439 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0213 18:25:41.835280 42439 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0213 18:25:41.934746 42439 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0213 18:25:42.051969 42439 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0213 18:25:42.052618 42439 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0213 18:25:42.076749 42439 out.go:204] - Booting up control plane ...
I0213 18:25:42.076911 42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0213 18:25:42.077038 42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0213 18:25:42.077173 42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0213 18:25:42.077296 42439 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0213 18:25:42.077559 42439 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0213 18:26:22.062630 42439 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0213 18:26:22.063347 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:26:22.063486 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:26:27.065022 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:26:27.065274 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:26:37.067610 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:26:37.067828 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:26:57.069919 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:26:57.070112 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:27:37.100562 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:27:37.100785 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:27:37.100808 42439 kubeadm.go:322]
I0213 18:27:37.100859 42439 kubeadm.go:322] Unfortunately, an error has occurred:
I0213 18:27:37.100904 42439 kubeadm.go:322] timed out waiting for the condition
I0213 18:27:37.100913 42439 kubeadm.go:322]
I0213 18:27:37.100950 42439 kubeadm.go:322] This error is likely caused by:
I0213 18:27:37.100988 42439 kubeadm.go:322] - The kubelet is not running
I0213 18:27:37.101090 42439 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0213 18:27:37.101097 42439 kubeadm.go:322]
I0213 18:27:37.101208 42439 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0213 18:27:37.101272 42439 kubeadm.go:322] - 'systemctl status kubelet'
I0213 18:27:37.101318 42439 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0213 18:27:37.101328 42439 kubeadm.go:322]
I0213 18:27:37.101442 42439 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0213 18:27:37.101528 42439 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0213 18:27:37.101535 42439 kubeadm.go:322]
I0213 18:27:37.101627 42439 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0213 18:27:37.101690 42439 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0213 18:27:37.101777 42439 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0213 18:27:37.101810 42439 kubeadm.go:322] - 'docker logs CONTAINERID'
I0213 18:27:37.101819 42439 kubeadm.go:322]
I0213 18:27:37.105805 42439 kubeadm.go:322] W0214 02:25:39.677836 1692 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0213 18:27:37.105939 42439 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0213 18:27:37.106023 42439 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0213 18:27:37.106186 42439 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0213 18:27:37.106294 42439 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0213 18:27:37.106385 42439 kubeadm.go:322] W0214 02:25:42.058099 1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 18:27:37.106467 42439 kubeadm.go:322] W0214 02:25:42.058969 1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 18:27:37.106524 42439 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0213 18:27:37.106592 42439 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0213 18:27:37.106670 42439 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0214 02:25:39.677836 1692 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0214 02:25:42.058099 1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0214 02:25:42.058969 1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-694000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0214 02:25:39.677836 1692 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0214 02:25:42.058099 1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0214 02:25:42.058969 1692 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0213 18:27:37.106706 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0213 18:27:37.526096 42439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0213 18:27:37.543464 42439 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0213 18:27:37.543530 42439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0213 18:27:37.558194 42439 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0213 18:27:37.558229 42439 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0213 18:27:37.613132 42439 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0213 18:27:37.613259 42439 kubeadm.go:322] [preflight] Running pre-flight checks
I0213 18:27:37.847822 42439 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0213 18:27:37.847901 42439 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0213 18:27:37.847983 42439 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0213 18:27:38.013054 42439 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0213 18:27:38.014546 42439 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0213 18:27:38.014585 42439 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0213 18:27:38.079484 42439 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0213 18:27:38.115748 42439 out.go:204] - Generating certificates and keys ...
I0213 18:27:38.115818 42439 kubeadm.go:322] [certs] Using existing ca certificate authority
I0213 18:27:38.115929 42439 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0213 18:27:38.116015 42439 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0213 18:27:38.116090 42439 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0213 18:27:38.116186 42439 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0213 18:27:38.116277 42439 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0213 18:27:38.116366 42439 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0213 18:27:38.116406 42439 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0213 18:27:38.116493 42439 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0213 18:27:38.116621 42439 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0213 18:27:38.116670 42439 kubeadm.go:322] [certs] Using the existing "sa" key
I0213 18:27:38.116782 42439 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0213 18:27:38.327345 42439 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0213 18:27:38.475374 42439 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0213 18:27:38.677019 42439 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0213 18:27:38.825940 42439 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0213 18:27:38.826786 42439 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0213 18:27:38.848295 42439 out.go:204] - Booting up control plane ...
I0213 18:27:38.848389 42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0213 18:27:38.848476 42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0213 18:27:38.848554 42439 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0213 18:27:38.848659 42439 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0213 18:27:38.848838 42439 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0213 18:28:18.848610 42439 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0213 18:28:18.849281 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:28:18.849486 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:28:23.851163 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:28:23.851384 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:28:33.853397 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:28:33.853530 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:28:53.855874 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:28:53.856120 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:29:33.858556 42439 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0213 18:29:33.858788 42439 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0213 18:29:33.858803 42439 kubeadm.go:322]
I0213 18:29:33.858838 42439 kubeadm.go:322] Unfortunately, an error has occurred:
I0213 18:29:33.858876 42439 kubeadm.go:322] timed out waiting for the condition
I0213 18:29:33.858884 42439 kubeadm.go:322]
I0213 18:29:33.858915 42439 kubeadm.go:322] This error is likely caused by:
I0213 18:29:33.858965 42439 kubeadm.go:322] - The kubelet is not running
I0213 18:29:33.859080 42439 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0213 18:29:33.859087 42439 kubeadm.go:322]
I0213 18:29:33.859201 42439 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0213 18:29:33.859238 42439 kubeadm.go:322] - 'systemctl status kubelet'
I0213 18:29:33.859270 42439 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0213 18:29:33.859295 42439 kubeadm.go:322]
I0213 18:29:33.859424 42439 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0213 18:29:33.859507 42439 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0213 18:29:33.859520 42439 kubeadm.go:322]
I0213 18:29:33.859635 42439 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0213 18:29:33.859694 42439 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0213 18:29:33.859776 42439 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0213 18:29:33.859809 42439 kubeadm.go:322] - 'docker logs CONTAINERID'
I0213 18:29:33.859816 42439 kubeadm.go:322]
I0213 18:29:33.863937 42439 kubeadm.go:322] W0214 02:27:37.603285 4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0213 18:29:33.864075 42439 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0213 18:29:33.864140 42439 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0213 18:29:33.864251 42439 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0213 18:29:33.864334 42439 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0213 18:29:33.864434 42439 kubeadm.go:322] W0214 02:27:38.820504 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 18:29:33.864540 42439 kubeadm.go:322] W0214 02:27:38.821386 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0213 18:29:33.864608 42439 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0213 18:29:33.864672 42439 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0213 18:29:33.864710 42439 kubeadm.go:406] StartCluster complete in 3m54.274127564s
I0213 18:29:33.864797 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0213 18:29:33.882606 42439 logs.go:276] 0 containers: []
W0213 18:29:33.882621 42439 logs.go:278] No container was found matching "kube-apiserver"
I0213 18:29:33.882688 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0213 18:29:33.901477 42439 logs.go:276] 0 containers: []
W0213 18:29:33.901491 42439 logs.go:278] No container was found matching "etcd"
I0213 18:29:33.901559 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0213 18:29:33.918997 42439 logs.go:276] 0 containers: []
W0213 18:29:33.919014 42439 logs.go:278] No container was found matching "coredns"
I0213 18:29:33.919092 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0213 18:29:33.936703 42439 logs.go:276] 0 containers: []
W0213 18:29:33.936716 42439 logs.go:278] No container was found matching "kube-scheduler"
I0213 18:29:33.936784 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0213 18:29:33.954643 42439 logs.go:276] 0 containers: []
W0213 18:29:33.954658 42439 logs.go:278] No container was found matching "kube-proxy"
I0213 18:29:33.954722 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0213 18:29:33.971642 42439 logs.go:276] 0 containers: []
W0213 18:29:33.971657 42439 logs.go:278] No container was found matching "kube-controller-manager"
I0213 18:29:33.971724 42439 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0213 18:29:33.989744 42439 logs.go:276] 0 containers: []
W0213 18:29:33.989759 42439 logs.go:278] No container was found matching "kindnet"
I0213 18:29:33.989767 42439 logs.go:123] Gathering logs for kubelet ...
I0213 18:29:33.989783 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0213 18:29:34.031751 42439 logs.go:123] Gathering logs for dmesg ...
I0213 18:29:34.031766 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0213 18:29:34.051795 42439 logs.go:123] Gathering logs for describe nodes ...
I0213 18:29:34.051810 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0213 18:29:34.104430 42439 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0213 18:29:34.104443 42439 logs.go:123] Gathering logs for Docker ...
I0213 18:29:34.104453 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0213 18:29:34.125757 42439 logs.go:123] Gathering logs for container status ...
I0213 18:29:34.125772 42439 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0213 18:29:34.185133 42439 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0214 02:27:37.603285 4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0214 02:27:38.820504 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0214 02:27:38.821386 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0213 18:29:34.185156 42439 out.go:239] *
W0213 18:29:34.185194 42439 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0214 02:27:37.603285 4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0214 02:27:38.820504 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0214 02:27:38.821386 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0213 18:29:34.185215 42439 out.go:239] *
W0213 18:29:34.185840 42439 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0213 18:29:34.271714 42439 out.go:177]
W0213 18:29:34.314627 42439 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0214 02:27:37.603285 4698 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0214 02:27:38.820504 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0214 02:27:38.821386 4698 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0213 18:29:34.314688 42439 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0213 18:29:34.314712 42439 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0213 18:29:34.336650 42439 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (276.41s)
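The run above exits with K8S_KUBELET_NOT_RUNNING after kubeadm warns that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. A minimal sketch of the retry that minikube itself suggests in the log, assuming the same binary, profile name, and docker driver as the failed run (the delete step is an added assumption to clear the half-initialized profile before retrying; the --extra-config flag is the one quoted in the Suggestion line above):

    # clear the partially initialized profile, then retry with the suggested kubelet cgroup driver
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-694000
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-694000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd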