=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker
E1003 18:16:53.284484 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:19:09.439540 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:19:23.949591 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:23.955601 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:23.967197 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:23.988387 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.029468 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.109703 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.271259 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:24.593489 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:25.234579 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:26.515649 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:29.076656 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:34.198463 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:19:37.126118 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/addons-431000/client.crt: no such file or directory
E1003 18:19:44.439043 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:20:04.919496 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
E1003 18:20:45.880386 22318 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/functional-323000/client.crt: no such file or directory
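The cert_rotation errors above are not produced by the minikube process under test (note the different pid, 22318): they come from the test binary's Kubernetes client, whose certificate-rotation watcher (client-go's cert_rotation.go) keeps reloading client certificates that the shared kubeconfig still references for the addons-431000 and functional-323000 profiles, both torn down by earlier tests. A quick way to confirm the stale references, assuming the kubeconfig stores client-certificate paths rather than inline data (a sketch):

    # list client certificate paths referenced by the shared kubeconfig and
    # flag any whose backing file no longer exists on disk
    KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
    grep 'client-certificate:' "$KUBECONFIG" | awk '{print $2}' | while read -r crt; do
      [ -f "$crt" ] || echo "stale: $crt"
    done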
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m22.614755911s)
-- stdout --
* [ingress-addon-legacy-022000] minikube v1.31.2 on Darwin 14.0
- MINIKUBE_LOCATION=17348
- KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node ingress-addon-legacy-022000 in cluster ingress-addon-legacy-022000
* Pulling base image ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
-- /stdout --
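"Generating certificates and keys ..." and "Booting up control plane ..." each appear twice in the stdout above, which typically means the first kubeadm bring-up did not become healthy within minikube's wait window and was retried before the run finally failed with exit status 109. While the node container is still up, the kubelet journal and minikube's aggregated logs are the usual next step (a sketch reusing this run's profile name):

    # inspect why the control plane failed to come up on the node
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-022000 -- sudo journalctl -u kubelet --no-pager | tail -n 50
    out/minikube-darwin-amd64 logs -p ingress-addon-legacy-022000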
** stderr **
I1003 18:16:49.640576 25198 out.go:296] Setting OutFile to fd 1 ...
I1003 18:16:49.640854 25198 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:16:49.640859 25198 out.go:309] Setting ErrFile to fd 2...
I1003 18:16:49.640863 25198 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1003 18:16:49.641035 25198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17348-21848/.minikube/bin
I1003 18:16:49.642501 25198 out.go:303] Setting JSON to false
I1003 18:16:49.664122 25198 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6378,"bootTime":1696375831,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W1003 18:16:49.664215 25198 start.go:136] gopshost.Virtualization returned error: not implemented yet
I1003 18:16:49.685843 25198 out.go:177] * [ingress-addon-legacy-022000] minikube v1.31.2 on Darwin 14.0
I1003 18:16:49.728685 25198 out.go:177] - MINIKUBE_LOCATION=17348
I1003 18:16:49.728772 25198 notify.go:220] Checking for updates...
I1003 18:16:49.772501 25198 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17348-21848/kubeconfig
I1003 18:16:49.793707 25198 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I1003 18:16:49.815776 25198 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1003 18:16:49.836593 25198 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17348-21848/.minikube
I1003 18:16:49.857650 25198 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1003 18:16:49.879146 25198 driver.go:373] Setting default libvirt URI to qemu:///system
I1003 18:16:49.936345 25198 docker.go:121] docker version: linux-24.0.6:Docker Desktop 4.24.0 (122432)
I1003 18:16:49.936493 25198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1003 18:16:50.036679 25198 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:16:50.02565516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
I1003 18:16:50.058543 25198 out.go:177] * Using the docker driver based on user configuration
I1003 18:16:50.079777 25198 start.go:298] selected driver: docker
I1003 18:16:50.079806 25198 start.go:902] validating driver "docker" against <nil>
I1003 18:16:50.079820 25198 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1003 18:16:50.084143 25198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1003 18:16:50.183711 25198 info.go:266] docker info: {ID:bd95ca90-0161-4940-8de1-bb75c87f79bd Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:65 SystemTime:2023-10-04 01:16:50.172652504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:9 KernelVersion:6.4.16-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:13 MemTotal:6227599360 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2-desktop.5] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.22.0-desktop.2] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.8] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.0.7]] Warnings:<nil>}}
I1003 18:16:50.183887 25198 start_flags.go:307] no existing cluster config was found, will generate one from the flags
I1003 18:16:50.184071 25198 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1003 18:16:50.205641 25198 out.go:177] * Using Docker Desktop driver with root privileges
I1003 18:16:50.227183 25198 cni.go:84] Creating CNI manager for ""
I1003 18:16:50.227219 25198 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1003 18:16:50.227231 25198 start_flags.go:321] config:
{Name:ingress-addon-legacy-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1003 18:16:50.249419 25198 out.go:177] * Starting control plane node ingress-addon-legacy-022000 in cluster ingress-addon-legacy-022000
I1003 18:16:50.292087 25198 cache.go:122] Beginning downloading kic base image for docker with docker
I1003 18:16:50.313359 25198 out.go:177] * Pulling base image ...
I1003 18:16:50.356170 25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1003 18:16:50.356200 25198 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local docker daemon
I1003 18:16:50.407315 25198 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 in local docker daemon, skipping pull
I1003 18:16:50.407337 25198 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 exists in daemon, skipping load
I1003 18:16:50.407912 25198 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1003 18:16:50.407922 25198 cache.go:57] Caching tarball of preloaded images
I1003 18:16:50.408104 25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1003 18:16:50.429359 25198 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I1003 18:16:50.471341 25198 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1003 18:16:50.557790 25198 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I1003 18:16:55.729731 25198 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1003 18:16:55.729910 25198 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I1003 18:16:56.352751 25198 cache.go:60] Finished verifying existence of preloaded tar for v1.18.20 on docker
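The preload URL above embeds its own digest (checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70), which download.go verifies and preload.go re-checks after saving. If the cached tarball is ever suspect, the same check can be repeated by hand on the Darwin host (a sketch):

    # re-hash the cached preload; it should print ff35f06d4f6c0bac9297b8f85d8ebf70
    md5 -q /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4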
I1003 18:16:56.352989 25198 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json ...
I1003 18:16:56.353013 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json: {Name:mkacce9391d23aa34ae0a7fb95ec37646fa4ab22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
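The profile config written here is plain JSON, so the flattened config dump logged at start_flags.go above can also be read in a friendlier form straight from disk (a sketch):

    # pretty-print the saved cluster config for this profile
    python3 -m json.tool /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json | head -n 25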
I1003 18:16:56.353349 25198 cache.go:195] Successfully downloaded all kic artifacts
I1003 18:16:56.353375 25198 start.go:365] acquiring machines lock for ingress-addon-legacy-022000: {Name:mkdafd119bcc1cfcbf80d8d66936b93f4444fb8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1003 18:16:56.353499 25198 start.go:369] acquired machines lock for "ingress-addon-legacy-022000" in 115.682µs
I1003 18:16:56.353519 25198 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I1003 18:16:56.353568 25198 start.go:125] createHost starting for "" (driver="docker")
I1003 18:16:56.389030 25198 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I1003 18:16:56.389370 25198 start.go:159] libmachine.API.Create for "ingress-addon-legacy-022000" (driver="docker")
I1003 18:16:56.389435 25198 client.go:168] LocalClient.Create starting
I1003 18:16:56.389597 25198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem
I1003 18:16:56.389674 25198 main.go:141] libmachine: Decoding PEM data...
I1003 18:16:56.389705 25198 main.go:141] libmachine: Parsing certificate...
I1003 18:16:56.389802 25198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem
I1003 18:16:56.389866 25198 main.go:141] libmachine: Decoding PEM data...
I1003 18:16:56.389892 25198 main.go:141] libmachine: Parsing certificate...
I1003 18:16:56.390786 25198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1003 18:16:56.444838 25198 cli_runner.go:211] docker network inspect ingress-addon-legacy-022000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1003 18:16:56.444945 25198 network_create.go:281] running [docker network inspect ingress-addon-legacy-022000] to gather additional debugging logs...
I1003 18:16:56.444962 25198 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-022000
W1003 18:16:56.496029 25198 cli_runner.go:211] docker network inspect ingress-addon-legacy-022000 returned with exit code 1
I1003 18:16:56.496069 25198 network_create.go:284] error running [docker network inspect ingress-addon-legacy-022000]: docker network inspect ingress-addon-legacy-022000: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-022000 not found
I1003 18:16:56.496085 25198 network_create.go:286] output of [docker network inspect ingress-addon-legacy-022000]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-022000 not found
** /stderr **
I1003 18:16:56.496252 25198 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1003 18:16:56.546975 25198 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006e4330}
I1003 18:16:56.547010 25198 network_create.go:124] attempt to create docker network ingress-addon-legacy-022000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I1003 18:16:56.547083 25198 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 ingress-addon-legacy-022000
I1003 18:16:56.633811 25198 network_create.go:108] docker network ingress-addon-legacy-022000 192.168.49.0/24 created
I1003 18:16:56.633863 25198 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-022000" container
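With the network created, the subnet and gateway chosen above can be double-checked against what Docker actually provisioned (a sketch):

    # should print: 192.168.49.0/24 192.168.49.1
    docker network inspect ingress-addon-legacy-022000 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'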
I1003 18:16:56.633981 25198 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1003 18:16:56.684937 25198 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-022000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --label created_by.minikube.sigs.k8s.io=true
I1003 18:16:56.736818 25198 oci.go:103] Successfully created a docker volume ingress-addon-legacy-022000
I1003 18:16:56.736962 25198 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-022000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --entrypoint /usr/bin/test -v ingress-addon-legacy-022000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 -d /var/lib
I1003 18:16:57.155206 25198 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-022000
I1003 18:16:57.155255 25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1003 18:16:57.155269 25198 kic.go:190] Starting extracting preloaded images to volume ...
I1003 18:16:57.155374 25198 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-022000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 -I lz4 -xf /preloaded.tar -C /extractDir
I1003 18:17:00.087874 25198 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-022000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 -I lz4 -xf /preloaded.tar -C /extractDir: (2.932433397s)
I1003 18:17:00.087895 25198 kic.go:199] duration metric: took 2.932617 seconds to extract preloaded images to volume
I1003 18:17:00.088005 25198 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1003 18:17:00.187276 25198 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-022000 --name ingress-addon-legacy-022000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-022000 --network ingress-addon-legacy-022000 --ip 192.168.49.2 --volume ingress-addon-legacy-022000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880
I1003 18:17:00.481927 25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Running}}
I1003 18:17:00.536778 25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
I1003 18:17:00.592055 25198 cli_runner.go:164] Run: docker exec ingress-addon-legacy-022000 stat /var/lib/dpkg/alternatives/iptables
I1003 18:17:00.711591 25198 oci.go:144] the created container "ingress-addon-legacy-022000" has a running status.
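The docker run above publishes container ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 host ports; the SSH port the provisioner dials below (56379) comes from that mapping and can be listed directly (a sketch):

    # show the host ports Docker assigned to the node's published container ports
    docker port ingress-addon-legacy-022000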
I1003 18:17:00.711637 25198 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa...
I1003 18:17:01.179067 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1003 18:17:01.179120 25198 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1003 18:17:01.238055 25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
I1003 18:17:01.289355 25198 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1003 18:17:01.289374 25198 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-022000 chown docker:docker /home/docker/.ssh/authorized_keys]
I1003 18:17:01.382499 25198 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-022000 --format={{.State.Status}}
I1003 18:17:01.433890 25198 machine.go:88] provisioning docker machine ...
I1003 18:17:01.433932 25198 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-022000"
I1003 18:17:01.434042 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:01.484944 25198 main.go:141] libmachine: Using SSH client type: native
I1003 18:17:01.485281 25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil> [] 0s} 127.0.0.1 56379 <nil> <nil>}
I1003 18:17:01.485298 25198 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-022000 && echo "ingress-addon-legacy-022000" | sudo tee /etc/hostname
I1003 18:17:01.626273 25198 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-022000
I1003 18:17:01.626370 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:01.678077 25198 main.go:141] libmachine: Using SSH client type: native
I1003 18:17:01.678382 25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil> [] 0s} 127.0.0.1 56379 <nil> <nil>}
I1003 18:17:01.678396 25198 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-022000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-022000/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-022000' | sudo tee -a /etc/hosts;
fi
fi
I1003 18:17:01.804619 25198 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1003 18:17:01.804652 25198 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/17348-21848/.minikube CaCertPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17348-21848/.minikube}
I1003 18:17:01.804670 25198 ubuntu.go:177] setting up certificates
I1003 18:17:01.804679 25198 provision.go:83] configureAuth start
I1003 18:17:01.804783 25198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022000
I1003 18:17:01.855832 25198 provision.go:138] copyHostCerts
I1003 18:17:01.855870 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem
I1003 18:17:01.855918 25198 exec_runner.go:144] found /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem, removing ...
I1003 18:17:01.855928 25198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem
I1003 18:17:01.856038 25198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17348-21848/.minikube/cert.pem (1123 bytes)
I1003 18:17:01.856224 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem
I1003 18:17:01.856264 25198 exec_runner.go:144] found /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem, removing ...
I1003 18:17:01.856268 25198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem
I1003 18:17:01.856366 25198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17348-21848/.minikube/key.pem (1675 bytes)
I1003 18:17:01.856515 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem
I1003 18:17:01.856542 25198 exec_runner.go:144] found /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem, removing ...
I1003 18:17:01.856547 25198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem
I1003 18:17:01.856613 25198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.pem (1078 bytes)
I1003 18:17:01.856748 25198 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-022000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-022000]
I1003 18:17:02.112089 25198 provision.go:172] copyRemoteCerts
I1003 18:17:02.112153 25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1003 18:17:02.112212 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:02.163237 25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
I1003 18:17:02.256750 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1003 18:17:02.256827 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1003 18:17:02.279685 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem -> /etc/docker/server.pem
I1003 18:17:02.279778 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I1003 18:17:02.302579 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1003 18:17:02.302657 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1003 18:17:02.325699 25198 provision.go:86] duration metric: configureAuth took 521.003121ms
I1003 18:17:02.325714 25198 ubuntu.go:193] setting minikube options for container-runtime
I1003 18:17:02.325856 25198 config.go:182] Loaded profile config "ingress-addon-legacy-022000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I1003 18:17:02.325922 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:02.377420 25198 main.go:141] libmachine: Using SSH client type: native
I1003 18:17:02.377736 25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil> [] 0s} 127.0.0.1 56379 <nil> <nil>}
I1003 18:17:02.377756 25198 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1003 18:17:02.504908 25198 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I1003 18:17:02.504924 25198 ubuntu.go:71] root file system type: overlay
I1003 18:17:02.505032 25198 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1003 18:17:02.505143 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:02.556500 25198 main.go:141] libmachine: Using SSH client type: native
I1003 18:17:02.556804 25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil> [] 0s} 127.0.0.1 56379 <nil> <nil>}
I1003 18:17:02.556870 25198 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1003 18:17:02.694333 25198 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1003 18:17:02.694427 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:02.746526 25198 main.go:141] libmachine: Using SSH client type: native
I1003 18:17:02.746825 25198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f3fc0] 0x13f6ca0 <nil> [] 0s} 127.0.0.1 56379 <nil> <nil>}
I1003 18:17:02.746838 25198 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1003 18:17:03.388283 25198 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-09-04 12:30:15.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-10-04 01:17:02.692023259 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
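The diff output confirms the rewrite took effect: because the generated unit differed from the stock one, the "|| { mv ...; systemctl restart docker; }" branch of the command above fired, swapping in minikube's dockerd command line (TLS on 2376, the 10.96.0.0/12 insecure registry, the nofile ulimit). That the override is live inside the node can be verified afterwards (a sketch):

    # the empty ExecStart= followed by minikube's dockerd invocation shows the new unit is active
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-022000 -- sudo systemctl cat docker.service | grep ExecStart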
I1003 18:17:03.388310 25198 machine.go:91] provisioned docker machine in 1.954392749s
I1003 18:17:03.388318 25198 client.go:171] LocalClient.Create took 6.998855255s
I1003 18:17:03.388338 25198 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-022000" took 6.998951411s
I1003 18:17:03.388348 25198 start.go:300] post-start starting for "ingress-addon-legacy-022000" (driver="docker")
I1003 18:17:03.388360 25198 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1003 18:17:03.388431 25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1003 18:17:03.388523 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:03.441765 25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
I1003 18:17:03.538202 25198 ssh_runner.go:195] Run: cat /etc/os-release
I1003 18:17:03.542510 25198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1003 18:17:03.542534 25198 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1003 18:17:03.542542 25198 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1003 18:17:03.542550 25198 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I1003 18:17:03.542560 25198 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17348-21848/.minikube/addons for local assets ...
I1003 18:17:03.542677 25198 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17348-21848/.minikube/files for local assets ...
I1003 18:17:03.542849 25198 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem -> 223182.pem in /etc/ssl/certs
I1003 18:17:03.542855 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem -> /etc/ssl/certs/223182.pem
I1003 18:17:03.543038 25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1003 18:17:03.552229 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem --> /etc/ssl/certs/223182.pem (1708 bytes)
I1003 18:17:03.575082 25198 start.go:303] post-start completed in 186.723971ms
I1003 18:17:03.575637 25198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022000
I1003 18:17:03.627434 25198 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/config.json ...
I1003 18:17:03.627904 25198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1003 18:17:03.627961 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:03.679095 25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
I1003 18:17:03.768348 25198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1003 18:17:03.773784 25198 start.go:128] duration metric: createHost completed in 7.420174026s
I1003 18:17:03.773804 25198 start.go:83] releasing machines lock for "ingress-addon-legacy-022000", held for 7.420277748s
I1003 18:17:03.773880 25198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-022000
I1003 18:17:03.825420 25198 ssh_runner.go:195] Run: cat /version.json
I1003 18:17:03.825443 25198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1003 18:17:03.825509 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:03.825519 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:03.878755 25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
I1003 18:17:03.878759 25198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/17348-21848/.minikube/machines/ingress-addon-legacy-022000/id_rsa Username:docker}
I1003 18:17:04.069520 25198 ssh_runner.go:195] Run: systemctl --version
I1003 18:17:04.075074 25198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1003 18:17:04.080502 25198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1003 18:17:04.105738 25198 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1003 18:17:04.105817 25198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I1003 18:17:04.123251 25198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I1003 18:17:04.140326 25198 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
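The find/sed passes above rewrite any IPv4 "subnet" (and bridge "dst") entries in the bridge and podman CNI configs to the 10.244.0.0/16 pod CIDR, and cni.go:308 reports exactly which files were touched. To eyeball the result, a minimal sketch reusing those file names and the docker-exec pattern used elsewhere in this log (assumes the node container is still running):

  docker exec ingress-addon-legacy-022000 cat /etc/cni/net.d/100-crio-bridge.conf
  docker exec ingress-addon-legacy-022000 cat /etc/cni/net.d/87-podman-bridge.conflist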
I1003 18:17:04.140340 25198 start.go:469] detecting cgroup driver to use...
I1003 18:17:04.140353 25198 detect.go:196] detected "cgroupfs" cgroup driver on host os
I1003 18:17:04.140509 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1003 18:17:04.156840 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1003 18:17:04.167414 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1003 18:17:04.177797 25198 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1003 18:17:04.177858 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1003 18:17:04.188315 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1003 18:17:04.198902 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1003 18:17:04.209581 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1003 18:17:04.220102 25198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1003 18:17:04.229998 25198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1003 18:17:04.240729 25198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1003 18:17:04.249874 25198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1003 18:17:04.258905 25198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 18:17:04.317201 25198 ssh_runner.go:195] Run: sudo systemctl restart containerd
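The sed block above patches /etc/containerd/config.toml in place: sandbox_image pinned to registry.k8s.io/pause:3.2, restrict_oom_score_adj disabled, SystemdCgroup = false to match the detected cgroupfs driver, the v1/linux runtimes mapped to io.containerd.runc.v2, and conf_dir pointed at /etc/cni/net.d, then reloads systemd and restarts containerd. A minimal sketch to confirm the patched keys took effect:

  docker exec ingress-addon-legacy-022000 grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml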
I1003 18:17:04.397916 25198 start.go:469] detecting cgroup driver to use...
I1003 18:17:04.397935 25198 detect.go:196] detected "cgroupfs" cgroup driver on host os
I1003 18:17:04.398013 25198 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1003 18:17:04.410188 25198 cruntime.go:277] skipping containerd shutdown because we are bound to it
I1003 18:17:04.410253 25198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1003 18:17:04.422474 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1003 18:17:04.440434 25198 ssh_runner.go:195] Run: which cri-dockerd
I1003 18:17:04.445362 25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1003 18:17:04.456330 25198 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1003 18:17:04.501480 25198 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1003 18:17:04.596125 25198 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1003 18:17:04.656388 25198 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
I1003 18:17:04.679730 25198 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1003 18:17:04.701351 25198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 18:17:04.763557 25198 ssh_runner.go:195] Run: sudo systemctl restart docker
I1003 18:17:05.030253 25198 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1003 18:17:05.055842 25198 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1003 18:17:05.103486 25198 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.6 ...
I1003 18:17:05.103634 25198 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-022000 dig +short host.docker.internal
I1003 18:17:05.226091 25198 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I1003 18:17:05.226184 25198 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I1003 18:17:05.231326 25198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1003 18:17:05.243309 25198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-022000
I1003 18:17:05.295581 25198 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I1003 18:17:05.295665 25198 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1003 18:17:05.316480 25198 docker.go:664] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1003 18:17:05.316507 25198 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I1003 18:17:05.316569 25198 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1003 18:17:05.326203 25198 ssh_runner.go:195] Run: which lz4
I1003 18:17:05.330780 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I1003 18:17:05.330916 25198 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1003 18:17:05.335291 25198 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1003 18:17:05.335314 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I1003 18:17:10.778726 25198 docker.go:628] Took 5.447849 seconds to copy over tarball
I1003 18:17:10.778806 25198 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I1003 18:17:12.773852 25198 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.995022473s)
I1003 18:17:12.773867 25198 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1003 18:17:12.827513 25198 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1003 18:17:12.837135 25198 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I1003 18:17:12.854050 25198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1003 18:17:12.907226 25198 ssh_runner.go:195] Run: sudo systemctl restart docker
I1003 18:17:14.017184 25198 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.109928566s)
I1003 18:17:14.017285 25198 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1003 18:17:14.037452 25198 docker.go:664] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I1003 18:17:14.037468 25198 docker.go:670] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I1003 18:17:14.037482 25198 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I1003 18:17:14.043566 25198 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I1003 18:17:14.043615 25198 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1003 18:17:14.043747 25198 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I1003 18:17:14.043812 25198 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I1003 18:17:14.044151 25198 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I1003 18:17:14.044424 25198 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I1003 18:17:14.044645 25198 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I1003 18:17:14.044699 25198 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I1003 18:17:14.049489 25198 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I1003 18:17:14.050669 25198 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1003 18:17:14.051048 25198 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I1003 18:17:14.051089 25198 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I1003 18:17:14.051350 25198 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I1003 18:17:14.051371 25198 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I1003 18:17:14.054013 25198 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I1003 18:17:14.054181 25198 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I1003 18:17:14.708734 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I1003 18:17:14.729631 25198 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I1003 18:17:14.729666 25198 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
I1003 18:17:14.729734 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I1003 18:17:14.750236 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I1003 18:17:15.184770 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I1003 18:17:15.205615 25198 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I1003 18:17:15.205641 25198 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I1003 18:17:15.205690 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I1003 18:17:15.227400 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I1003 18:17:15.254313 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I1003 18:17:15.491521 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I1003 18:17:15.512091 25198 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I1003 18:17:15.512117 25198 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I1003 18:17:15.512166 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I1003 18:17:15.533031 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I1003 18:17:15.803671 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I1003 18:17:15.824616 25198 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I1003 18:17:15.824642 25198 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I1003 18:17:15.824699 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I1003 18:17:15.844939 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I1003 18:17:16.098696 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I1003 18:17:16.120109 25198 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I1003 18:17:16.120134 25198 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I1003 18:17:16.120189 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I1003 18:17:16.140909 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I1003 18:17:16.416800 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I1003 18:17:16.437931 25198 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I1003 18:17:16.437971 25198 docker.go:317] Removing image: registry.k8s.io/pause:3.2
I1003 18:17:16.438025 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I1003 18:17:16.458630 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I1003 18:17:16.754937 25198 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I1003 18:17:16.776851 25198 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I1003 18:17:16.776877 25198 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.7
I1003 18:17:16.776944 25198 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I1003 18:17:16.796897 25198 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I1003 18:17:16.796939 25198 cache_images.go:92] LoadImages completed in 2.759434173s
W1003 18:17:16.796986 25198 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
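This failure is on the host side: LoadImages fell back to minikube's on-disk image cache, and the etcd tarball is missing there. A minimal sketch to confirm the cache miss before retrying, using the path from the error above:

  ls -la /Users/jenkins/minikube-integration/17348-21848/.minikube/cache/images/amd64/registry.k8s.io/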
I1003 18:17:16.797062 25198 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1003 18:17:16.851199 25198 cni.go:84] Creating CNI manager for ""
I1003 18:17:16.851215 25198 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1003 18:17:16.851230 25198 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1003 18:17:16.851247 25198 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-022000 NodeName:ingress-addon-legacy-022000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1003 18:17:16.851364 25198 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-022000"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1003 18:17:16.851433 25198 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-022000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1003 18:17:16.851503 25198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I1003 18:17:16.861230 25198 binaries.go:44] Found k8s binaries, skipping transfer
I1003 18:17:16.861283 25198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1003 18:17:16.870695 25198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I1003 18:17:16.887971 25198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I1003 18:17:16.905789 25198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I1003 18:17:16.922989 25198 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1003 18:17:16.927754 25198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
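The one-liner above is minikube's idempotent /etc/hosts patch: drop any existing line ending in the host name, append the fresh mapping, stage the result in a PID-named temp file, and sudo-copy it back into place. Unrolled, the same steps read:

  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
  echo "192.168.49.2 control-plane.minikube.internal" >> /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts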
I1003 18:17:16.939430 25198 certs.go:56] Setting up /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000 for IP: 192.168.49.2
I1003 18:17:16.939453 25198 certs.go:190] acquiring lock for shared ca certs: {Name:mkadefe5d54c46ee473565278d437df4894e94b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:16.939639 25198 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.key
I1003 18:17:16.939697 25198 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.key
I1003 18:17:16.939746 25198 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.key
I1003 18:17:16.939759 25198 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.crt with IP's: []
I1003 18:17:16.978781 25198 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.crt ...
I1003 18:17:16.978794 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.crt: {Name:mk6f567f53ff613362aaff9d5ce6fe5f16cdaf75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:16.979136 25198 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.key ...
I1003 18:17:16.979145 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/client.key: {Name:mk522dc078d0ee34b187de1b6bdfd0a1d23e4c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:16.979388 25198 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2
I1003 18:17:16.979404 25198 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1003 18:17:17.085662 25198 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2 ...
I1003 18:17:17.085671 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2: {Name:mkb4f2e32dfb3a804841d143820286dd7389e5ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:17.085922 25198 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2 ...
I1003 18:17:17.085930 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2: {Name:mka2a24b441d60e88bb44c5006adb92431c10e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:17.086124 25198 certs.go:337] copying /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt
I1003 18:17:17.086306 25198 certs.go:341] copying /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key
I1003 18:17:17.086483 25198 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key
I1003 18:17:17.086496 25198 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt with IP's: []
I1003 18:17:17.192561 25198 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt ...
I1003 18:17:17.192570 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt: {Name:mkc14c9e12b1678a9d4c0469dc5082d5ecda6bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:17.192809 25198 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key ...
I1003 18:17:17.192817 25198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key: {Name:mk624f3883ac4fd328415fd64824fc4487304d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1003 18:17:17.193020 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1003 18:17:17.193046 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1003 18:17:17.193063 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1003 18:17:17.193078 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1003 18:17:17.193098 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1003 18:17:17.193114 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1003 18:17:17.193129 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1003 18:17:17.193151 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1003 18:17:17.193238 25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318.pem (1338 bytes)
W1003 18:17:17.193287 25198 certs.go:433] ignoring /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318_empty.pem, impossibly tiny 0 bytes
I1003 18:17:17.193296 25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca-key.pem (1675 bytes)
I1003 18:17:17.193327 25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/ca.pem (1078 bytes)
I1003 18:17:17.193355 25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/cert.pem (1123 bytes)
I1003 18:17:17.193391 25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/certs/key.pem (1675 bytes)
I1003 18:17:17.193459 25198 certs.go:437] found cert: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem (1708 bytes)
I1003 18:17:17.193490 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1003 18:17:17.193515 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318.pem -> /usr/share/ca-certificates/22318.pem
I1003 18:17:17.193532 25198 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem -> /usr/share/ca-certificates/223182.pem
I1003 18:17:17.193989 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1003 18:17:17.217741 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1003 18:17:17.240463 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1003 18:17:17.263741 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/profiles/ingress-addon-legacy-022000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1003 18:17:17.286942 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1003 18:17:17.310574 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1003 18:17:17.333484 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1003 18:17:17.356921 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1003 18:17:17.380381 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1003 18:17:17.403862 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/certs/22318.pem --> /usr/share/ca-certificates/22318.pem (1338 bytes)
I1003 18:17:17.426663 25198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17348-21848/.minikube/files/etc/ssl/certs/223182.pem --> /usr/share/ca-certificates/223182.pem (1708 bytes)
I1003 18:17:17.449699 25198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1003 18:17:17.467348 25198 ssh_runner.go:195] Run: openssl version
I1003 18:17:17.473670 25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1003 18:17:17.483877 25198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1003 18:17:17.488355 25198 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 4 01:07 /usr/share/ca-certificates/minikubeCA.pem
I1003 18:17:17.488402 25198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1003 18:17:17.495418 25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1003 18:17:17.505756 25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22318.pem && ln -fs /usr/share/ca-certificates/22318.pem /etc/ssl/certs/22318.pem"
I1003 18:17:17.515862 25198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22318.pem
I1003 18:17:17.520502 25198 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 4 01:12 /usr/share/ca-certificates/22318.pem
I1003 18:17:17.520564 25198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22318.pem
I1003 18:17:17.527505 25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22318.pem /etc/ssl/certs/51391683.0"
I1003 18:17:17.537448 25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/223182.pem && ln -fs /usr/share/ca-certificates/223182.pem /etc/ssl/certs/223182.pem"
I1003 18:17:17.547644 25198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/223182.pem
I1003 18:17:17.552476 25198 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 4 01:12 /usr/share/ca-certificates/223182.pem
I1003 18:17:17.552532 25198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/223182.pem
I1003 18:17:17.559469 25198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/223182.pem /etc/ssl/certs/3ec20f2e.0"
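Each "test -L ... || ln -fs ..." above creates the hash-named symlink that OpenSSL's certificate-directory lookup expects; the link name is the subject hash printed by the openssl x509 -hash invocation just before it, plus a ".0" suffix. For the minikube CA, for example:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above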
I1003 18:17:17.569492 25198 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I1003 18:17:17.574117 25198 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I1003 18:17:17.574162 25198 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-022000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
I1003 18:17:17.574257 25198 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1003 18:17:17.594128 25198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1003 18:17:17.604116 25198 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1003 18:17:17.613560 25198 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I1003 18:17:17.613615 25198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1003 18:17:17.622881 25198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1003 18:17:17.622913 25198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1003 18:17:17.674961 25198 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I1003 18:17:17.675020 25198 kubeadm.go:322] [preflight] Running pre-flight checks
I1003 18:17:17.924971 25198 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I1003 18:17:17.925058 25198 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1003 18:17:17.925161 25198 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1003 18:17:18.112344 25198 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1003 18:17:18.113210 25198 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1003 18:17:18.113243 25198 kubeadm.go:322] [kubelet-start] Starting the kubelet
I1003 18:17:18.200658 25198 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1003 18:17:18.222337 25198 out.go:204] - Generating certificates and keys ...
I1003 18:17:18.222414 25198 kubeadm.go:322] [certs] Using existing ca certificate authority
I1003 18:17:18.222474 25198 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I1003 18:17:18.540190 25198 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I1003 18:17:18.635670 25198 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I1003 18:17:18.762271 25198 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I1003 18:17:18.973246 25198 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I1003 18:17:19.042381 25198 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I1003 18:17:19.042515 25198 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1003 18:17:19.096707 25198 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I1003 18:17:19.096844 25198 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1003 18:17:19.155292 25198 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I1003 18:17:19.323478 25198 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I1003 18:17:19.410712 25198 kubeadm.go:322] [certs] Generating "sa" key and public key
I1003 18:17:19.410821 25198 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1003 18:17:19.610306 25198 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I1003 18:17:19.705894 25198 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1003 18:17:19.925086 25198 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1003 18:17:20.131727 25198 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1003 18:17:20.132236 25198 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1003 18:17:20.153782 25198 out.go:204] - Booting up control plane ...
I1003 18:17:20.153954 25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1003 18:17:20.154082 25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1003 18:17:20.154204 25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1003 18:17:20.154360 25198 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1003 18:17:20.154607 25198 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1003 18:18:00.141467 25198 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I1003 18:18:00.142944 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:18:00.143172 25198 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:18:05.144150 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:18:05.144384 25198 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:18:15.146101 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:18:15.146331 25198 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:18:35.147365 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:18:35.147604 25198 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:19:15.150240 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:19:15.150580 25198 kubeadm.go:322] [kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:19:15.150604 25198 kubeadm.go:322]
I1003 18:19:15.150680 25198 kubeadm.go:322] Unfortunately, an error has occurred:
I1003 18:19:15.150748 25198 kubeadm.go:322] timed out waiting for the condition
I1003 18:19:15.150759 25198 kubeadm.go:322]
I1003 18:19:15.150792 25198 kubeadm.go:322] This error is likely caused by:
I1003 18:19:15.150823 25198 kubeadm.go:322] - The kubelet is not running
I1003 18:19:15.150961 25198 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1003 18:19:15.150982 25198 kubeadm.go:322]
I1003 18:19:15.151220 25198 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1003 18:19:15.151296 25198 kubeadm.go:322] - 'systemctl status kubelet'
I1003 18:19:15.151401 25198 kubeadm.go:322] - 'journalctl -xeu kubelet'
I1003 18:19:15.151419 25198 kubeadm.go:322]
I1003 18:19:15.151539 25198 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1003 18:19:15.151639 25198 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI.
I1003 18:19:15.151650 25198 kubeadm.go:322]
I1003 18:19:15.151722 25198 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
I1003 18:19:15.151768 25198 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I1003 18:19:15.151835 25198 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I1003 18:19:15.151865 25198 kubeadm.go:322] - 'docker logs CONTAINERID'
I1003 18:19:15.151873 25198 kubeadm.go:322]
I1003 18:19:15.154040 25198 kubeadm.go:322] W1004 01:17:17.674141 1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1003 18:19:15.154256 25198 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1003 18:19:15.154326 25198 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1003 18:19:15.154442 25198 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
I1003 18:19:15.154532 25198 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1003 18:19:15.154625 25198 kubeadm.go:322] W1004 01:17:20.136615 1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1003 18:19:15.154713 25198 kubeadm.go:322] W1004 01:17:20.137673 1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1003 18:19:15.154775 25198 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1003 18:19:15.154850 25198 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W1003 18:19:15.154921 25198 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1004 01:17:17.674141 1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1004 01:17:20.136615 1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1004 01:17:20.137673 1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-022000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1004 01:17:17.674141 1707 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1004 01:17:20.136615 1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1004 01:17:20.137673 1707 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
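For reference, the checks kubeadm recommends in the message above can be run by hand from the host while the node container is still up; a minimal sketch, assuming the docker-driver node for this profile is still reachable via minikube ssh (the command forms are the ones quoted in the advice above; wrapping them in minikube ssh is the only addition):
$ minikube ssh -p ingress-addon-legacy-022000 -- sudo systemctl status kubelet
$ minikube ssh -p ingress-addon-legacy-022000 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
$ minikube ssh -p ingress-addon-legacy-022000 -- curl -sSL http://localhost:10248/healthz
$ minikube ssh -p ingress-addon-legacy-022000 -- 'docker ps -a | grep kube | grep -v pause'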
I1003 18:19:15.154955 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I1003 18:19:15.571581 25198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1003 18:19:15.583548 25198 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I1003 18:19:15.583602 25198 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1003 18:19:15.592956 25198 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1003 18:19:15.592981 25198 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1003 18:19:15.645954 25198 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I1003 18:19:15.646023 25198 kubeadm.go:322] [preflight] Running pre-flight checks
I1003 18:19:15.899579 25198 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I1003 18:19:15.899685 25198 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1003 18:19:15.899762 25198 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1003 18:19:16.085606 25198 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1003 18:19:16.086432 25198 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1003 18:19:16.086492 25198 kubeadm.go:322] [kubelet-start] Starting the kubelet
I1003 18:19:16.178086 25198 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1003 18:19:16.199695 25198 out.go:204] - Generating certificates and keys ...
I1003 18:19:16.199758 25198 kubeadm.go:322] [certs] Using existing ca certificate authority
I1003 18:19:16.199822 25198 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I1003 18:19:16.199875 25198 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1003 18:19:16.199942 25198 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I1003 18:19:16.200016 25198 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I1003 18:19:16.200094 25198 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I1003 18:19:16.200190 25198 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I1003 18:19:16.200277 25198 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I1003 18:19:16.200368 25198 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1003 18:19:16.200437 25198 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1003 18:19:16.200471 25198 kubeadm.go:322] [certs] Using the existing "sa" key
I1003 18:19:16.200524 25198 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1003 18:19:16.332251 25198 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I1003 18:19:16.390500 25198 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1003 18:19:16.515656 25198 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1003 18:19:16.659205 25198 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1003 18:19:16.659704 25198 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1003 18:19:16.681517 25198 out.go:204] - Booting up control plane ...
I1003 18:19:16.681670 25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1003 18:19:16.681825 25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1003 18:19:16.681963 25198 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1003 18:19:16.682112 25198 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1003 18:19:16.682440 25198 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1003 18:19:56.669885 25198 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I1003 18:19:56.670502 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:19:56.670761 25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:20:01.671493 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:20:01.671744 25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:20:11.673163 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:20:11.673396 25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:20:31.673795 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:20:31.673951 25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:21:11.676813 25198 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I1003 18:21:11.677089 25198 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I1003 18:21:11.677102 25198 kubeadm.go:322]
I1003 18:21:11.677143 25198 kubeadm.go:322] Unfortunately, an error has occurred:
I1003 18:21:11.677208 25198 kubeadm.go:322] timed out waiting for the condition
I1003 18:21:11.677240 25198 kubeadm.go:322]
I1003 18:21:11.677310 25198 kubeadm.go:322] This error is likely caused by:
I1003 18:21:11.677363 25198 kubeadm.go:322] - The kubelet is not running
I1003 18:21:11.677541 25198 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1003 18:21:11.677555 25198 kubeadm.go:322]
I1003 18:21:11.677664 25198 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1003 18:21:11.677712 25198 kubeadm.go:322] - 'systemctl status kubelet'
I1003 18:21:11.677781 25198 kubeadm.go:322] - 'journalctl -xeu kubelet'
I1003 18:21:11.677807 25198 kubeadm.go:322]
I1003 18:21:11.677933 25198 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I1003 18:21:11.678031 25198 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I1003 18:21:11.678051 25198 kubeadm.go:322]
I1003 18:21:11.678139 25198 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I1003 18:21:11.678200 25198 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I1003 18:21:11.678275 25198 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I1003 18:21:11.678304 25198 kubeadm.go:322] - 'docker logs CONTAINERID'
I1003 18:21:11.678308 25198 kubeadm.go:322]
I1003 18:21:11.680008 25198 kubeadm.go:322] W1004 01:19:15.645248 4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1003 18:21:11.680160 25198 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1003 18:21:11.680221 25198 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I1003 18:21:11.680326 25198 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
I1003 18:21:11.680401 25198 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1003 18:21:11.680498 25198 kubeadm.go:322] W1004 01:19:16.664459 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1003 18:21:11.680607 25198 kubeadm.go:322] W1004 01:19:16.665222 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I1003 18:21:11.680671 25198 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I1003 18:21:11.680754 25198 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I1003 18:21:11.680785 25198 kubeadm.go:406] StartCluster complete in 3m54.10430272s
I1003 18:21:11.680876 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1003 18:21:11.700816 25198 logs.go:284] 0 containers: []
W1003 18:21:11.700832 25198 logs.go:286] No container was found matching "kube-apiserver"
I1003 18:21:11.700909 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1003 18:21:11.720611 25198 logs.go:284] 0 containers: []
W1003 18:21:11.720623 25198 logs.go:286] No container was found matching "etcd"
I1003 18:21:11.720689 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1003 18:21:11.741371 25198 logs.go:284] 0 containers: []
W1003 18:21:11.741384 25198 logs.go:286] No container was found matching "coredns"
I1003 18:21:11.741452 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1003 18:21:11.761584 25198 logs.go:284] 0 containers: []
W1003 18:21:11.761598 25198 logs.go:286] No container was found matching "kube-scheduler"
I1003 18:21:11.761673 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1003 18:21:11.782175 25198 logs.go:284] 0 containers: []
W1003 18:21:11.782188 25198 logs.go:286] No container was found matching "kube-proxy"
I1003 18:21:11.782268 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1003 18:21:11.802707 25198 logs.go:284] 0 containers: []
W1003 18:21:11.802723 25198 logs.go:286] No container was found matching "kube-controller-manager"
I1003 18:21:11.802831 25198 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I1003 18:21:11.823031 25198 logs.go:284] 0 containers: []
W1003 18:21:11.823046 25198 logs.go:286] No container was found matching "kindnet"
I1003 18:21:11.823061 25198 logs.go:123] Gathering logs for kubelet ...
I1003 18:21:11.823069 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1003 18:21:11.861029 25198 logs.go:123] Gathering logs for dmesg ...
I1003 18:21:11.861043 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1003 18:21:11.875127 25198 logs.go:123] Gathering logs for describe nodes ...
I1003 18:21:11.875141 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1003 18:21:11.931996 25198 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1003 18:21:11.932019 25198 logs.go:123] Gathering logs for Docker ...
I1003 18:21:11.932025 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I1003 18:21:11.949079 25198 logs.go:123] Gathering logs for container status ...
I1003 18:21:11.949093 25198 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1003 18:21:12.004355 25198 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1004 01:19:15.645248 4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1004 01:19:16.664459 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1004 01:19:16.665222 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1003 18:21:12.004377 25198 out.go:239] *
W1003 18:21:12.004431 25198 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1004 01:19:15.645248 4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1004 01:19:16.664459 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1004 01:19:16.665222 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1003 18:21:12.004457 25198 out.go:239] *
W1003 18:21:12.005081 25198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
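As the box advises, the full log bundle for an issue report can be captured per profile; for this run that would be (flags as documented for minikube logs):
$ minikube logs -p ingress-addon-legacy-022000 --file=logs.txt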
I1003 18:21:12.069742 25198 out.go:177]
W1003 18:21:12.111781 25198 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W1004 01:19:15.645248 4782 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W1004 01:19:16.664459 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1004 01:19:16.665222 4782 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W1003 18:21:12.111823 25198 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
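Applied to this run, the suggested retry would look roughly like the sketch below, reusing the start flags quoted at the bottom of this test and adding the extra kubelet config; deleting the profile first so the node is rebuilt cleanly is an assumption, not part of the suggestion:
$ minikube delete -p ingress-addon-legacy-022000
$ minikube start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker --extra-config=kubelet.cgroup-driver=systemd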
W1003 18:21:12.111847 25198 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1003 18:21:12.153760 25198 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-022000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (262.66s)
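Both kubeadm attempts fail the same way: the kubelet never answers its health probe on localhost:10248, while preflight repeatedly flags the cgroupfs/systemd cgroup-driver mismatch (IsDockerSystemdCheck). Besides the kubelet-side flag suggested in the log, the guide linked in that warning aligns the drivers from the Docker side instead; a hypothetical sketch, assuming the node runs systemd and its Docker daemon reads /etc/docker/daemon.json:
# /etc/docker/daemon.json -- switch Docker's cgroup driver to systemd
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
$ sudo systemctl restart docker
$ docker info --format '{{.CgroupDriver}}'    # expect: systemd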