=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E0613 11:54:22.779654 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:56:38.935326 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:56:42.366927 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.373381 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.384776 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.405175 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.445783 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.526001 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:42.687986 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:43.008158 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:43.648633 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:44.930892 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:47.492112 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:56:52.614049 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:57:02.856613 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:57:06.626790 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/addons-054000/client.crt: no such file or directory
E0613 11:57:23.337519 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
E0613 11:58:04.300215 20800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/functional-216000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m37.650451816s)
-- stdout --
* [ingress-addon-legacy-779000] minikube v1.30.1 on Darwin 13.4
- MINIKUBE_LOCATION=15003
- KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-779000 in cluster ingress-addon-legacy-779000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
** stderr **
I0613 11:54:10.823011 23427 out.go:296] Setting OutFile to fd 1 ...
I0613 11:54:10.823179 23427 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:54:10.823185 23427 out.go:309] Setting ErrFile to fd 2...
I0613 11:54:10.823190 23427 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0613 11:54:10.823303 23427 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15003-20351/.minikube/bin
I0613 11:54:10.824801 23427 out.go:303] Setting JSON to false
I0613 11:54:10.843964 23427 start.go:127] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6821,"bootTime":1686675629,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
W0613 11:54:10.844056 23427 start.go:135] gopshost.Virtualization returned error: not implemented yet
I0613 11:54:10.865606 23427 out.go:177] * [ingress-addon-legacy-779000] minikube v1.30.1 on Darwin 13.4
I0613 11:54:10.908628 23427 out.go:177] - MINIKUBE_LOCATION=15003
I0613 11:54:10.908583 23427 notify.go:220] Checking for updates...
I0613 11:54:10.930862 23427 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15003-20351/kubeconfig
I0613 11:54:10.952570 23427 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0613 11:54:10.973502 23427 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0613 11:54:10.995609 23427 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15003-20351/.minikube
I0613 11:54:11.017532 23427 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0613 11:54:11.039190 23427 driver.go:373] Setting default libvirt URI to qemu:///system
I0613 11:54:11.097187 23427 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.20.1 (110738)
I0613 11:54:11.097315 23427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0613 11:54:11.192011 23427 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:54:11.181285431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
I0613 11:54:11.213970 23427 out.go:177] * Using the docker driver based on user configuration
I0613 11:54:11.235819 23427 start.go:297] selected driver: docker
I0613 11:54:11.235846 23427 start.go:884] validating driver "docker" against <nil>
I0613 11:54:11.235866 23427 start.go:895] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0613 11:54:11.239933 23427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0613 11:54:11.333628 23427 info.go:266] docker info: {ID:c18f23ef-4e44-410e-b2ce-38db72a58e15 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:false NGoroutines:61 SystemTime:2023-06-13 18:54:11.321874773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:7 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.18.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.19] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.4] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.12.0]] Warnings:<nil>}}
I0613 11:54:11.333789 23427 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0613 11:54:11.333972 23427 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0613 11:54:11.357447 23427 out.go:177] * Using Docker Desktop driver with root privileges
I0613 11:54:11.378212 23427 cni.go:84] Creating CNI manager for ""
I0613 11:54:11.378260 23427 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0613 11:54:11.378272 23427 start_flags.go:319] config:
{Name:ingress-addon-legacy-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0613 11:54:11.399045 23427 out.go:177] * Starting control plane node ingress-addon-legacy-779000 in cluster ingress-addon-legacy-779000
I0613 11:54:11.441496 23427 cache.go:122] Beginning downloading kic base image for docker with docker
I0613 11:54:11.463240 23427 out.go:177] * Pulling base image ...
I0613 11:54:11.506547 23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0613 11:54:11.506583 23427 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon
I0613 11:54:11.557336 23427 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b in local docker daemon, skipping pull
I0613 11:54:11.557361 23427 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b exists in daemon, skipping load
I0613 11:54:11.605208 23427 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0613 11:54:11.605227 23427 cache.go:57] Caching tarball of preloaded images
I0613 11:54:11.605482 23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0613 11:54:11.627161 23427 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0613 11:54:11.670227 23427 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0613 11:54:11.887437 23427 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0613 11:54:27.746063 23427 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0613 11:54:27.746259 23427 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0613 11:54:28.367884 23427 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0613 11:54:28.368143 23427 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/config.json ...
I0613 11:54:28.368172 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/config.json: {Name:mk38925f429f1551ce8de16609abb39837213218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:28.368489 23427 cache.go:195] Successfully downloaded all kic artifacts
I0613 11:54:28.368512 23427 start.go:365] acquiring machines lock for ingress-addon-legacy-779000: {Name:mk814d28bdc1de21db092a373c6c7d9d40f769d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0613 11:54:28.368651 23427 start.go:369] acquired machines lock for "ingress-addon-legacy-779000" in 131.826µs
I0613 11:54:28.368672 23427 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0613 11:54:28.368725 23427 start.go:125] createHost starting for "" (driver="docker")
I0613 11:54:28.391423 23427 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0613 11:54:28.391755 23427 start.go:159] libmachine.API.Create for "ingress-addon-legacy-779000" (driver="docker")
I0613 11:54:28.391806 23427 client.go:168] LocalClient.Create starting
I0613 11:54:28.391997 23427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem
I0613 11:54:28.392070 23427 main.go:141] libmachine: Decoding PEM data...
I0613 11:54:28.392103 23427 main.go:141] libmachine: Parsing certificate...
I0613 11:54:28.392232 23427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem
I0613 11:54:28.392291 23427 main.go:141] libmachine: Decoding PEM data...
I0613 11:54:28.392309 23427 main.go:141] libmachine: Parsing certificate...
I0613 11:54:28.412280 23427 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-779000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0613 11:54:28.465965 23427 cli_runner.go:211] docker network inspect ingress-addon-legacy-779000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0613 11:54:28.466089 23427 network_create.go:281] running [docker network inspect ingress-addon-legacy-779000] to gather additional debugging logs...
I0613 11:54:28.466107 23427 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-779000
W0613 11:54:28.516643 23427 cli_runner.go:211] docker network inspect ingress-addon-legacy-779000 returned with exit code 1
I0613 11:54:28.516666 23427 network_create.go:284] error running [docker network inspect ingress-addon-legacy-779000]: docker network inspect ingress-addon-legacy-779000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-779000 not found
I0613 11:54:28.516691 23427 network_create.go:286] output of [docker network inspect ingress-addon-legacy-779000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-779000 not found
** /stderr **
I0613 11:54:28.516778 23427 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0613 11:54:28.566767 23427 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00079aed0}
I0613 11:54:28.566806 23427 network_create.go:123] attempt to create docker network ingress-addon-legacy-779000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0613 11:54:28.566887 23427 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 ingress-addon-legacy-779000
I0613 11:54:28.649238 23427 network_create.go:107] docker network ingress-addon-legacy-779000 192.168.49.0/24 created
I0613 11:54:28.649273 23427 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-779000" container
I0613 11:54:28.649383 23427 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0613 11:54:28.697876 23427 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-779000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --label created_by.minikube.sigs.k8s.io=true
I0613 11:54:28.747999 23427 oci.go:103] Successfully created a docker volume ingress-addon-legacy-779000
I0613 11:54:28.748144 23427 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-779000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --entrypoint /usr/bin/test -v ingress-addon-legacy-779000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -d /var/lib
I0613 11:54:29.141540 23427 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-779000
I0613 11:54:29.141578 23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0613 11:54:29.141593 23427 kic.go:190] Starting extracting preloaded images to volume ...
I0613 11:54:29.141735 23427 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-779000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir
I0613 11:54:35.139433 23427 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-779000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b -I lz4 -xf /preloaded.tar -C /extractDir: (5.99739474s)
I0613 11:54:35.139459 23427 kic.go:199] duration metric: took 5.997683 seconds to extract preloaded images to volume
I0613 11:54:35.139591 23427 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0613 11:54:35.242781 23427 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-779000 --name ingress-addon-legacy-779000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-779000 --network ingress-addon-legacy-779000 --ip 192.168.49.2 --volume ingress-addon-legacy-779000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b
I0613 11:54:35.525225 23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Running}}
I0613 11:54:35.578891 23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
I0613 11:54:35.638705 23427 cli_runner.go:164] Run: docker exec ingress-addon-legacy-779000 stat /var/lib/dpkg/alternatives/iptables
I0613 11:54:35.745576 23427 oci.go:144] the created container "ingress-addon-legacy-779000" has a running status.
I0613 11:54:35.745621 23427 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa...
I0613 11:54:36.010741 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0613 11:54:36.010822 23427 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0613 11:54:36.072127 23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
I0613 11:54:36.126364 23427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0613 11:54:36.126383 23427 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-779000 chown docker:docker /home/docker/.ssh/authorized_keys]
I0613 11:54:36.217614 23427 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-779000 --format={{.State.Status}}
I0613 11:54:36.268828 23427 machine.go:88] provisioning docker machine ...
I0613 11:54:36.268873 23427 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-779000"
I0613 11:54:36.268989 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:36.320226 23427 main.go:141] libmachine: Using SSH client type: native
I0613 11:54:36.320614 23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil> [] 0s} 127.0.0.1 56371 <nil> <nil>}
I0613 11:54:36.320630 23427 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-779000 && echo "ingress-addon-legacy-779000" | sudo tee /etc/hostname
I0613 11:54:36.449082 23427 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-779000
I0613 11:54:36.449174 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:36.499205 23427 main.go:141] libmachine: Using SSH client type: native
I0613 11:54:36.499557 23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil> [] 0s} 127.0.0.1 56371 <nil> <nil>}
I0613 11:54:36.499571 23427 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-779000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-779000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-779000' | sudo tee -a /etc/hosts;
fi
fi
I0613 11:54:36.618985 23427 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0613 11:54:36.619010 23427 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15003-20351/.minikube CaCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15003-20351/.minikube}
I0613 11:54:36.619029 23427 ubuntu.go:177] setting up certificates
I0613 11:54:36.619043 23427 provision.go:83] configureAuth start
I0613 11:54:36.619132 23427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-779000
I0613 11:54:36.668936 23427 provision.go:138] copyHostCerts
I0613 11:54:36.668987 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
I0613 11:54:36.669048 23427 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem, removing ...
I0613 11:54:36.669059 23427 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem
I0613 11:54:36.669202 23427 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.pem (1082 bytes)
I0613 11:54:36.669413 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
I0613 11:54:36.669471 23427 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem, removing ...
I0613 11:54:36.669476 23427 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem
I0613 11:54:36.669542 23427 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/cert.pem (1123 bytes)
I0613 11:54:36.669676 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
I0613 11:54:36.669717 23427 exec_runner.go:144] found /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem, removing ...
I0613 11:54:36.669722 23427 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem
I0613 11:54:36.669782 23427 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15003-20351/.minikube/key.pem (1679 bytes)
I0613 11:54:36.669923 23427 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-779000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-779000]
I0613 11:54:36.732525 23427 provision.go:172] copyRemoteCerts
I0613 11:54:36.732593 23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0613 11:54:36.732648 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:36.783106 23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
I0613 11:54:36.872436 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0613 11:54:36.872515 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0613 11:54:36.894516 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem -> /etc/docker/server.pem
I0613 11:54:36.894587 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0613 11:54:36.916348 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0613 11:54:36.916419 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0613 11:54:36.938462 23427 provision.go:86] duration metric: configureAuth took 319.396158ms
I0613 11:54:36.938480 23427 ubuntu.go:193] setting minikube options for container-runtime
I0613 11:54:36.938637 23427 config.go:182] Loaded profile config "ingress-addon-legacy-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0613 11:54:36.938704 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:36.990530 23427 main.go:141] libmachine: Using SSH client type: native
I0613 11:54:36.990878 23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil> [] 0s} 127.0.0.1 56371 <nil> <nil>}
I0613 11:54:36.990904 23427 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0613 11:54:37.110800 23427 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0613 11:54:37.110815 23427 ubuntu.go:71] root file system type: overlay
I0613 11:54:37.110907 23427 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0613 11:54:37.110988 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:37.160524 23427 main.go:141] libmachine: Using SSH client type: native
I0613 11:54:37.160864 23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil> [] 0s} 127.0.0.1 56371 <nil> <nil>}
I0613 11:54:37.160912 23427 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0613 11:54:37.287788 23427 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0613 11:54:37.287885 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:37.338339 23427 main.go:141] libmachine: Using SSH client type: native
I0613 11:54:37.338683 23427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cac0] 0x140fb60 <nil> [] 0s} 127.0.0.1 56371 <nil> <nil>}
I0613 11:54:37.338696 23427 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0613 11:54:38.001127 23427 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-05-25 21:51:00.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-06-13 18:54:37.284599717 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0613 11:54:38.001155 23427 machine.go:91] provisioned docker machine in 1.732254071s
I0613 11:54:38.001164 23427 client.go:171] LocalClient.Create took 9.609063576s
I0613 11:54:38.001181 23427 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-779000" took 9.609140761s
I0613 11:54:38.001192 23427 start.go:300] post-start starting for "ingress-addon-legacy-779000" (driver="docker")
I0613 11:54:38.001205 23427 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0613 11:54:38.001292 23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0613 11:54:38.001360 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:38.052179 23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
I0613 11:54:38.142086 23427 ssh_runner.go:195] Run: cat /etc/os-release
I0613 11:54:38.146206 23427 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0613 11:54:38.146234 23427 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0613 11:54:38.146242 23427 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0613 11:54:38.146246 23427 info.go:137] Remote host: Ubuntu 22.04.2 LTS
I0613 11:54:38.146255 23427 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/addons for local assets ...
I0613 11:54:38.146346 23427 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15003-20351/.minikube/files for local assets ...
I0613 11:54:38.146539 23427 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> 208002.pem in /etc/ssl/certs
I0613 11:54:38.146546 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> /etc/ssl/certs/208002.pem
I0613 11:54:38.146726 23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0613 11:54:38.155691 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /etc/ssl/certs/208002.pem (1708 bytes)
I0613 11:54:38.177454 23427 start.go:303] post-start completed in 176.240976ms
I0613 11:54:38.177991 23427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-779000
I0613 11:54:38.227190 23427 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/config.json ...
I0613 11:54:38.227644 23427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0613 11:54:38.227713 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:38.277216 23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
I0613 11:54:38.362769 23427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0613 11:54:38.368057 23427 start.go:128] duration metric: createHost completed in 9.999019363s
I0613 11:54:38.368077 23427 start.go:83] releasing machines lock for "ingress-addon-legacy-779000", held for 9.999118341s
I0613 11:54:38.368177 23427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-779000
I0613 11:54:38.418890 23427 ssh_runner.go:195] Run: cat /version.json
I0613 11:54:38.418936 23427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0613 11:54:38.418963 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:38.419017 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:38.474553 23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
I0613 11:54:38.474584 23427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56371 SSHKeyPath:/Users/jenkins/minikube-integration/15003-20351/.minikube/machines/ingress-addon-legacy-779000/id_rsa Username:docker}
I0613 11:54:38.662799 23427 ssh_runner.go:195] Run: systemctl --version
I0613 11:54:38.668234 23427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0613 11:54:38.673574 23427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0613 11:54:38.696647 23427 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0613 11:54:38.696720 23427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0613 11:54:38.712847 23427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0613 11:54:38.729063 23427 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0613 11:54:38.729077 23427 start.go:464] detecting cgroup driver to use...
I0613 11:54:38.729092 23427 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0613 11:54:38.729204 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0613 11:54:38.744995 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0613 11:54:38.755045 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0613 11:54:38.764925 23427 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0613 11:54:38.764985 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0613 11:54:38.774968 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0613 11:54:38.784895 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0613 11:54:38.794562 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0613 11:54:38.804464 23427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0613 11:54:38.813979 23427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0613 11:54:38.824108 23427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0613 11:54:38.832848 23427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0613 11:54:38.841748 23427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0613 11:54:38.911095 23427 ssh_runner.go:195] Run: sudo systemctl restart containerd
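Note: the effect of the sed edits above can be verified on the node after the restart; a sketch, not part of the test run:

  # cgroupfs driver means SystemdCgroup must remain false
  sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
  # sandbox image and CNI conf dir as rewritten above
  sudo grep -n -e 'sandbox_image' -e 'conf_dir' /etc/containerd/config.toml
  sudo systemctl is-active containerd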
I0613 11:54:38.989458 23427 start.go:464] detecting cgroup driver to use...
I0613 11:54:38.989477 23427 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0613 11:54:38.989541 23427 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0613 11:54:39.001198 23427 cruntime.go:276] skipping containerd shutdown because we are bound to it
I0613 11:54:39.001268 23427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0613 11:54:39.013274 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0613 11:54:39.031801 23427 ssh_runner.go:195] Run: which cri-dockerd
I0613 11:54:39.036785 23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0613 11:54:39.047281 23427 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0613 11:54:39.065747 23427 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0613 11:54:39.166831 23427 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0613 11:54:39.259644 23427 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0613 11:54:39.259663 23427 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0613 11:54:39.277329 23427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0613 11:54:39.369042 23427 ssh_runner.go:195] Run: sudo systemctl restart docker
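Note: the 144-byte /etc/docker/daemon.json written just above is not echoed in the log; given the cgroupfs driver chosen here it is expected to carry an exec-opts entry along these lines (the exact file contents are an assumption):

  sudo cat /etc/docker/daemon.json
  # expected shape (assumption): {"exec-opts":["native.cgroupdriver=cgroupfs"], ...}
  docker info --format '{{.CgroupDriver}}'   # should report cgroupfs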
I0613 11:54:39.617663 23427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0613 11:54:39.644691 23427 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0613 11:54:39.718212 23427 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
I0613 11:54:39.718413 23427 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-779000 dig +short host.docker.internal
I0613 11:54:39.825476 23427 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0613 11:54:39.825597 23427 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0613 11:54:39.830659 23427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0613 11:54:39.842029 23427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-779000
I0613 11:54:39.894646 23427 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0613 11:54:39.894733 23427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0613 11:54:39.916152 23427 docker.go:636] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0613 11:54:39.916175 23427 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0613 11:54:39.916252 23427 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0613 11:54:39.925603 23427 ssh_runner.go:195] Run: which lz4
I0613 11:54:39.929999 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0613 11:54:39.930133 23427 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0613 11:54:39.934359 23427 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0613 11:54:39.934385 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0613 11:54:45.825419 23427 docker.go:600] Took 5.895175 seconds to copy over tarball
I0613 11:54:45.849756 23427 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0613 11:54:48.238625 23427 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.388770785s)
I0613 11:54:48.238641 23427 ssh_runner.go:146] rm: /preloaded.tar.lz4
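Note: the ~424 MB preload tarball is copied from the host cache and unpacked on every fresh node. It can be fetched ahead of a run so this step never waits on a download; a sketch, assuming the standard minikube CLI and the default MINIKUBE_HOME:

  # populate the .minikube cache without creating a cluster
  minikube start --download-only --kubernetes-version=v1.18.20 --driver=docker
  ls -lh ~/.minikube/cache/preloaded-tarball/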
I0613 11:54:48.320737 23427 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0613 11:54:48.330050 23427 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0613 11:54:48.346119 23427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0613 11:54:48.416801 23427 ssh_runner.go:195] Run: sudo systemctl restart docker
I0613 11:54:49.697923 23427 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.281061948s)
I0613 11:54:49.698031 23427 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0613 11:54:49.719059 23427 docker.go:636] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0613 11:54:49.719080 23427 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0613 11:54:49.719088 23427 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0613 11:54:49.725175 23427 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0613 11:54:49.725175 23427 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0613 11:54:49.725485 23427 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0613 11:54:49.726360 23427 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0613 11:54:49.726451 23427 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0613 11:54:49.726570 23427 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0613 11:54:49.726929 23427 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0613 11:54:49.727173 23427 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0613 11:54:49.732746 23427 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0613 11:54:49.732942 23427 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0613 11:54:49.733884 23427 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0613 11:54:49.734147 23427 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0613 11:54:49.734409 23427 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0613 11:54:49.735921 23427 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0613 11:54:49.736198 23427 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0613 11:54:49.736818 23427 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0613 11:54:50.861250 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0613 11:54:50.883845 23427 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0613 11:54:50.883889 23427 docker.go:316] Removing image: registry.k8s.io/pause:3.2
I0613 11:54:50.883949 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0613 11:54:50.905495 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0613 11:54:51.093369 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0613 11:54:51.379018 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0613 11:54:51.401377 23427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0613 11:54:51.401411 23427 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0613 11:54:51.401484 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0613 11:54:51.424520 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0613 11:54:51.455927 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0613 11:54:51.480588 23427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0613 11:54:51.480642 23427 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0613 11:54:51.480708 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0613 11:54:51.504710 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0613 11:54:51.630996 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0613 11:54:51.653265 23427 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0613 11:54:51.653305 23427 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
I0613 11:54:51.653374 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0613 11:54:51.676893 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0613 11:54:51.861415 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0613 11:54:51.885139 23427 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0613 11:54:51.885164 23427 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
I0613 11:54:51.885218 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0613 11:54:51.909080 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0613 11:54:52.169698 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0613 11:54:52.192142 23427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0613 11:54:52.192176 23427 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0613 11:54:52.192243 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0613 11:54:52.213451 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0613 11:54:52.388921 23427 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0613 11:54:52.410515 23427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0613 11:54:52.410544 23427 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0613 11:54:52.410623 23427 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0613 11:54:52.430816 23427 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0613 11:54:52.430865 23427 cache_images.go:92] LoadImages completed in 2.711687681s
W0613 11:54:52.430914 23427 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15003-20351/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
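Note: LoadImages fails here only because the per-image cache files (e.g. .minikube/cache/images/amd64/registry.k8s.io/pause_3.2) were never written on this host; the preloaded k8s.gcr.io images remain in the node's Docker, so startup continues. One way to populate that cache up front, assuming the standard minikube CLI:

  # writes the images into .minikube/cache/images/... for later loads
  minikube cache add registry.k8s.io/pause:3.2
  minikube cache add registry.k8s.io/kube-apiserver:v1.18.20
  ls ~/.minikube/cache/images/amd64/registry.k8s.io/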
I0613 11:54:52.430988 23427 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0613 11:54:52.480864 23427 cni.go:84] Creating CNI manager for ""
I0613 11:54:52.480881 23427 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0613 11:54:52.480899 23427 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0613 11:54:52.480915 23427 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-779000 NodeName:ingress-addon-legacy-779000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0613 11:54:52.481030 23427 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-779000"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
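Note: the generated config above is uploaded as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to /var/tmp/minikube/kubeadm.yaml. It can be exercised by hand on the node before a real init; a minimal sketch, assuming kubeadm v1.18.20 is already under /var/lib/minikube/binaries:

  # walk the init phases against the uploaded config without bringing up a cluster
  sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run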
I0613 11:54:52.481111 23427 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-779000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0613 11:54:52.481175 23427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0613 11:54:52.490418 23427 binaries.go:44] Found k8s binaries, skipping transfer
I0613 11:54:52.490489 23427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0613 11:54:52.499380 23427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0613 11:54:52.515605 23427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0613 11:54:52.532083 23427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0613 11:54:52.548559 23427 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0613 11:54:52.553083 23427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0613 11:54:52.564320 23427 certs.go:56] Setting up /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000 for IP: 192.168.49.2
I0613 11:54:52.564339 23427 certs.go:190] acquiring lock for shared ca certs: {Name:mk20811674ea367fa17992256fb23dfacc431c35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:52.564519 23427 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key
I0613 11:54:52.564583 23427 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key
I0613 11:54:52.564634 23427 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.key
I0613 11:54:52.564652 23427 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.crt with IP's: []
I0613 11:54:52.849299 23427 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.crt ...
I0613 11:54:52.849314 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.crt: {Name:mk12778a0174bdc1fc09c0d55a6fd7f3d05cd83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:52.849629 23427 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.key ...
I0613 11:54:52.849637 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/client.key: {Name:mk8b6b19d254a0fe5245af650025a25a6b542746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:52.849840 23427 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2
I0613 11:54:52.849854 23427 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0613 11:54:52.944884 23427 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2 ...
I0613 11:54:52.944892 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2: {Name:mk19b0fd53d494d349d0be176f5bfefb19d62dce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:52.945154 23427 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2 ...
I0613 11:54:52.945161 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2: {Name:mk7e62afc08dc057bfa6dde33979b944dc9d3fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:52.945382 23427 certs.go:337] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt
I0613 11:54:52.945581 23427 certs.go:341] copying /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key
I0613 11:54:52.945776 23427 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key
I0613 11:54:52.945788 23427 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt with IP's: []
I0613 11:54:53.055872 23427 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt ...
I0613 11:54:53.055880 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt: {Name:mkf9bc2529b4c3414209a490a1381c41eb01337c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:53.056095 23427 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key ...
I0613 11:54:53.056103 23427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key: {Name:mk29cff19e77fa275e8a69816ee1f8fe0d9310f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0613 11:54:53.056294 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0613 11:54:53.056324 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0613 11:54:53.056345 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0613 11:54:53.056365 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0613 11:54:53.056390 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0613 11:54:53.056416 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0613 11:54:53.056435 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0613 11:54:53.056456 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0613 11:54:53.056552 23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem (1338 bytes)
W0613 11:54:53.056618 23427 certs.go:433] ignoring /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800_empty.pem, impossibly tiny 0 bytes
I0613 11:54:53.056630 23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca-key.pem (1679 bytes)
I0613 11:54:53.056671 23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/ca.pem (1082 bytes)
I0613 11:54:53.056702 23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/cert.pem (1123 bytes)
I0613 11:54:53.056743 23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/certs/key.pem (1679 bytes)
I0613 11:54:53.056814 23427 certs.go:437] found cert: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem (1708 bytes)
I0613 11:54:53.056848 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem -> /usr/share/ca-certificates/208002.pem
I0613 11:54:53.056870 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0613 11:54:53.056888 23427 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem -> /usr/share/ca-certificates/20800.pem
I0613 11:54:53.057381 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0613 11:54:53.080830 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0613 11:54:53.102837 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0613 11:54:53.124562 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/profiles/ingress-addon-legacy-779000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0613 11:54:53.146528 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0613 11:54:53.168334 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0613 11:54:53.190413 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0613 11:54:53.212979 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0613 11:54:53.234813 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/files/etc/ssl/certs/208002.pem --> /usr/share/ca-certificates/208002.pem (1708 bytes)
I0613 11:54:53.256918 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0613 11:54:53.278971 23427 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15003-20351/.minikube/certs/20800.pem --> /usr/share/ca-certificates/20800.pem (1338 bytes)
I0613 11:54:53.300956 23427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0613 11:54:53.317555 23427 ssh_runner.go:195] Run: openssl version
I0613 11:54:53.323676 23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208002.pem && ln -fs /usr/share/ca-certificates/208002.pem /etc/ssl/certs/208002.pem"
I0613 11:54:53.333510 23427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208002.pem
I0613 11:54:53.338094 23427 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 13 18:49 /usr/share/ca-certificates/208002.pem
I0613 11:54:53.338149 23427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208002.pem
I0613 11:54:53.345327 23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/208002.pem /etc/ssl/certs/3ec20f2e.0"
I0613 11:54:53.355360 23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0613 11:54:53.365162 23427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0613 11:54:53.369494 23427 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 13 18:43 /usr/share/ca-certificates/minikubeCA.pem
I0613 11:54:53.369545 23427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0613 11:54:53.376916 23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0613 11:54:53.386704 23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20800.pem && ln -fs /usr/share/ca-certificates/20800.pem /etc/ssl/certs/20800.pem"
I0613 11:54:53.396479 23427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20800.pem
I0613 11:54:53.400962 23427 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 13 18:49 /usr/share/ca-certificates/20800.pem
I0613 11:54:53.401010 23427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20800.pem
I0613 11:54:53.407999 23427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20800.pem /etc/ssl/certs/51391683.0"
I0613 11:54:53.417690 23427 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0613 11:54:53.421910 23427 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0613 11:54:53.421960 23427 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0613 11:54:53.422053 23427 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0613 11:54:53.442681 23427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0613 11:54:53.451987 23427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0613 11:54:53.460922 23427 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0613 11:54:53.460975 23427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0613 11:54:53.469759 23427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0613 11:54:53.469788 23427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0613 11:54:53.520794 23427 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0613 11:54:53.520838 23427 kubeadm.go:322] [preflight] Running pre-flight checks
I0613 11:54:53.771934 23427 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0613 11:54:53.772024 23427 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0613 11:54:53.772106 23427 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0613 11:54:53.958924 23427 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0613 11:54:53.959596 23427 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0613 11:54:53.959664 23427 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0613 11:54:54.034320 23427 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0613 11:54:54.078589 23427 out.go:204] - Generating certificates and keys ...
I0613 11:54:54.078678 23427 kubeadm.go:322] [certs] Using existing ca certificate authority
I0613 11:54:54.078765 23427 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0613 11:54:54.497170 23427 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0613 11:54:54.597518 23427 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0613 11:54:54.726338 23427 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0613 11:54:54.775230 23427 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0613 11:54:54.975184 23427 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0613 11:54:54.975295 23427 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0613 11:54:55.088341 23427 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0613 11:54:55.088469 23427 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0613 11:54:55.237143 23427 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0613 11:54:55.358543 23427 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0613 11:54:55.577993 23427 kubeadm.go:322] [certs] Generating "sa" key and public key
I0613 11:54:55.578064 23427 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0613 11:54:55.734667 23427 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0613 11:54:55.866241 23427 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0613 11:54:55.996140 23427 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0613 11:54:56.186237 23427 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0613 11:54:56.186642 23427 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0613 11:54:56.208044 23427 out.go:204] - Booting up control plane ...
I0613 11:54:56.208148 23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0613 11:54:56.208239 23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0613 11:54:56.208348 23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0613 11:54:56.208449 23427 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0613 11:54:56.208616 23427 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0613 11:55:36.197751 23427 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0613 11:55:36.198481 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:55:36.198735 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:55:41.199865 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:55:41.200099 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:55:51.201726 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:55:51.201966 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:56:11.204293 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:56:11.204508 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:56:51.206695 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:56:51.206954 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:56:51.206974 23427 kubeadm.go:322]
I0613 11:56:51.207013 23427 kubeadm.go:322] Unfortunately, an error has occurred:
I0613 11:56:51.207102 23427 kubeadm.go:322] timed out waiting for the condition
I0613 11:56:51.207126 23427 kubeadm.go:322]
I0613 11:56:51.207163 23427 kubeadm.go:322] This error is likely caused by:
I0613 11:56:51.207191 23427 kubeadm.go:322] - The kubelet is not running
I0613 11:56:51.207308 23427 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0613 11:56:51.207319 23427 kubeadm.go:322]
I0613 11:56:51.207394 23427 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0613 11:56:51.207426 23427 kubeadm.go:322] - 'systemctl status kubelet'
I0613 11:56:51.207453 23427 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0613 11:56:51.207457 23427 kubeadm.go:322]
I0613 11:56:51.207553 23427 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0613 11:56:51.207639 23427 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0613 11:56:51.207651 23427 kubeadm.go:322]
I0613 11:56:51.207720 23427 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0613 11:56:51.207764 23427 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0613 11:56:51.207847 23427 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0613 11:56:51.207873 23427 kubeadm.go:322] - 'docker logs CONTAINERID'
I0613 11:56:51.207881 23427 kubeadm.go:322]
I0613 11:56:51.211036 23427 kubeadm.go:322] W0613 18:54:53.519522 1673 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0613 11:56:51.211195 23427 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0613 11:56:51.211260 23427 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0613 11:56:51.211367 23427 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
I0613 11:56:51.211459 23427 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0613 11:56:51.211558 23427 kubeadm.go:322] W0613 18:54:56.189875 1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0613 11:56:51.211662 23427 kubeadm.go:322] W0613 18:54:56.190619 1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0613 11:56:51.211728 23427 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0613 11:56:51.211785 23427 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0613 11:56:51.211889 23427 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-779000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0613 18:54:53.519522 1673 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0613 18:54:56.189875 1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0613 18:54:56.190619 1673 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
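Note: when the wait-control-plane phase times out like this, the checks kubeadm suggests can be run directly inside the kic node; a minimal sketch using only the commands named in the output above (docker driver, node container ingress-addon-legacy-779000):

  docker exec -it ingress-addon-legacy-779000 bash
  # inside the node:
  systemctl status kubelet
  journalctl -xeu kubelet --no-pager | tail -n 100
  docker ps -a | grep kube | grep -v pause
  docker logs CONTAINERID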
I0613 11:56:51.211925 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0613 11:56:51.626878 23427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0613 11:56:51.638105 23427 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0613 11:56:51.638167 23427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0613 11:56:51.647614 23427 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0613 11:56:51.647649 23427 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0613 11:56:51.697880 23427 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0613 11:56:51.697932 23427 kubeadm.go:322] [preflight] Running pre-flight checks
I0613 11:56:51.940031 23427 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0613 11:56:51.940107 23427 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0613 11:56:51.940183 23427 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0613 11:56:52.122676 23427 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0613 11:56:52.123288 23427 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0613 11:56:52.123321 23427 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0613 11:56:52.193353 23427 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0613 11:56:52.214985 23427 out.go:204] - Generating certificates and keys ...
I0613 11:56:52.215080 23427 kubeadm.go:322] [certs] Using existing ca certificate authority
I0613 11:56:52.215156 23427 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0613 11:56:52.215221 23427 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0613 11:56:52.215282 23427 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0613 11:56:52.215363 23427 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0613 11:56:52.215427 23427 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0613 11:56:52.215481 23427 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0613 11:56:52.215546 23427 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0613 11:56:52.215634 23427 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0613 11:56:52.215711 23427 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0613 11:56:52.215744 23427 kubeadm.go:322] [certs] Using the existing "sa" key
I0613 11:56:52.215782 23427 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0613 11:56:52.326715 23427 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0613 11:56:52.394734 23427 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0613 11:56:52.743050 23427 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0613 11:56:52.868674 23427 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0613 11:56:52.869160 23427 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0613 11:56:52.891067 23427 out.go:204] - Booting up control plane ...
I0613 11:56:52.891277 23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0613 11:56:52.891407 23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0613 11:56:52.891520 23427 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0613 11:56:52.891679 23427 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0613 11:56:52.891956 23427 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0613 11:57:32.880050 23427 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0613 11:57:32.880882 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:57:32.881082 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:57:37.883269 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:57:37.883483 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:57:47.884825 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:57:47.885047 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:58:07.887411 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:58:07.887640 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:58:47.890480 23427 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0613 11:58:47.890881 23427 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0613 11:58:47.890896 23427 kubeadm.go:322]
I0613 11:58:47.890952 23427 kubeadm.go:322] Unfortunately, an error has occurred:
I0613 11:58:47.891081 23427 kubeadm.go:322] timed out waiting for the condition
I0613 11:58:47.891101 23427 kubeadm.go:322]
I0613 11:58:47.891181 23427 kubeadm.go:322] This error is likely caused by:
I0613 11:58:47.891256 23427 kubeadm.go:322] - The kubelet is not running
I0613 11:58:47.891487 23427 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0613 11:58:47.891509 23427 kubeadm.go:322]
I0613 11:58:47.891692 23427 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0613 11:58:47.891735 23427 kubeadm.go:322] - 'systemctl status kubelet'
I0613 11:58:47.891788 23427 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0613 11:58:47.891807 23427 kubeadm.go:322]
I0613 11:58:47.891920 23427 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0613 11:58:47.892006 23427 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0613 11:58:47.892014 23427 kubeadm.go:322]
I0613 11:58:47.892159 23427 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0613 11:58:47.892213 23427 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0613 11:58:47.892314 23427 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0613 11:58:47.892341 23427 kubeadm.go:322] - 'docker logs CONTAINERID'
I0613 11:58:47.892346 23427 kubeadm.go:322]
I0613 11:58:47.895480 23427 kubeadm.go:322] W0613 18:56:51.696670 4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0613 11:58:47.895659 23427 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0613 11:58:47.895731 23427 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0613 11:58:47.895847 23427 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
I0613 11:58:47.895955 23427 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0613 11:58:47.896053 23427 kubeadm.go:322] W0613 18:56:52.872236 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0613 11:58:47.896147 23427 kubeadm.go:322] W0613 18:56:52.873018 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0613 11:58:47.896228 23427 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0613 11:58:47.896311 23427 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0613 11:58:47.896340 23427 kubeadm.go:406] StartCluster complete in 3m54.467352234s
I0613 11:58:47.896444 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0613 11:58:47.916198 23427 logs.go:284] 0 containers: []
W0613 11:58:47.916211 23427 logs.go:286] No container was found matching "kube-apiserver"
I0613 11:58:47.916282 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0613 11:58:47.937301 23427 logs.go:284] 0 containers: []
W0613 11:58:47.937316 23427 logs.go:286] No container was found matching "etcd"
I0613 11:58:47.937401 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0613 11:58:47.957426 23427 logs.go:284] 0 containers: []
W0613 11:58:47.957443 23427 logs.go:286] No container was found matching "coredns"
I0613 11:58:47.957514 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0613 11:58:47.977779 23427 logs.go:284] 0 containers: []
W0613 11:58:47.977792 23427 logs.go:286] No container was found matching "kube-scheduler"
I0613 11:58:47.977863 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0613 11:58:47.998090 23427 logs.go:284] 0 containers: []
W0613 11:58:47.998105 23427 logs.go:286] No container was found matching "kube-proxy"
I0613 11:58:47.998169 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0613 11:58:48.018916 23427 logs.go:284] 0 containers: []
W0613 11:58:48.018930 23427 logs.go:286] No container was found matching "kube-controller-manager"
I0613 11:58:48.019006 23427 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0613 11:58:48.039074 23427 logs.go:284] 0 containers: []
W0613 11:58:48.039089 23427 logs.go:286] No container was found matching "kindnet"
I0613 11:58:48.039096 23427 logs.go:123] Gathering logs for container status ...
I0613 11:58:48.039104 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0613 11:58:48.093166 23427 logs.go:123] Gathering logs for kubelet ...
I0613 11:58:48.093180 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0613 11:58:48.161218 23427 logs.go:123] Gathering logs for dmesg ...
I0613 11:58:48.161235 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0613 11:58:48.176606 23427 logs.go:123] Gathering logs for describe nodes ...
I0613 11:58:48.176621 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0613 11:58:48.233328 23427 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0613 11:58:48.233346 23427 logs.go:123] Gathering logs for Docker ...
I0613 11:58:48.233353 23427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
W0613 11:58:48.249929 23427 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0613 18:56:51.696670 4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0613 18:56:52.872236 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0613 18:56:52.873018 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0613 11:58:48.249951 23427 out.go:239] *
W0613 11:58:48.249991 23427 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0613 18:56:51.696670 4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0613 18:56:52.872236 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0613 18:56:52.873018 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0613 11:58:48.250006 23427 out.go:239] *
W0613 11:58:48.250634 23427 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0613 11:58:48.293328 23427 out.go:177]
W0613 11:58:48.356434 23427 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0613 18:56:51.696670 4164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0613 18:56:52.872236 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0613 18:56:52.873018 4164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0613 11:58:48.356523 23427 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0613 11:58:48.356562 23427 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0613 11:58:48.378470 23427 out.go:177]
** /stderr **
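The kubelet never answered its health probe in either init attempt, and the log-gathering step above found no kube-apiserver, etcd, or kubelet containers at all. The checks kubeadm recommends in its own output can be run by hand against the node; a minimal sketch, assuming the profile from this run is still up and that `minikube ssh -p <profile> -- <cmd>` works with the docker driver:
# Commands mirror the troubleshooting hints printed by kubeadm above
minikube ssh -p ingress-addon-legacy-779000 -- sudo systemctl status kubelet
minikube ssh -p ingress-addon-legacy-779000 -- sudo journalctl -xeu kubelet
minikube ssh -p ingress-addon-legacy-779000 -- curl -sSL http://localhost:10248/healthz
minikube ssh -p ingress-addon-legacy-779000 -- "docker ps -a | grep kube | grep -v pause"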
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (277.69s)
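Both suggestions printed at the end of the run point at the cgroup-driver mismatch flagged by the IsDockerSystemdCheck warning (Docker on the node uses "cgroupfs", the recommended driver is "systemd"). A minimal sketch of a local retry, reusing the binary and profile from this run plus the flag minikube itself suggests; the daemon.json alternative is an assumption of mine and is not taken from this log:
# Retry with the kubelet cgroup-driver override suggested above
out/minikube-darwin-amd64 start -p ingress-addon-legacy-779000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --extra-config=kubelet.cgroup-driver=systemd
# Alternative (assumption): align Docker inside the node with the systemd driver instead,
# e.g. add {"exec-opts": ["native.cgroupdriver=systemd"]} to /etc/docker/daemon.json on the
# node and restart the docker service before re-running kubeadm init.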