=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E0216 16:51:33.934467 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:52:55.856413 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:54:26.472112 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.477437 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.487823 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.508193 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.548566 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.628984 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:26.789483 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:27.110244 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:27.751158 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:29.031770 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:31.593620 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:36.714395 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:54:46.955443 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:55:07.436614 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:55:12.011491 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:55:39.697165 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/addons-500129/client.crt: no such file or directory
E0216 16:55:48.397752 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:57:10.319163 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
E0216 16:59:26.472389 13619 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/functional-361824/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m38.52254508s)
-- stdout --
* [ingress-addon-legacy-988248] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17936
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node ingress-addon-legacy-988248 in cluster ingress-addon-legacy-988248
* Pulling base image v0.0.42-1708008208-17936 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.531525 5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.532621 5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
Feb 16 16:59:07 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:07.526485 5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
-- /stdout --
** stderr **
I0216 16:50:55.622203 67305 out.go:291] Setting OutFile to fd 1 ...
I0216 16:50:55.622685 67305 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:55.622700 67305 out.go:304] Setting ErrFile to fd 2...
I0216 16:50:55.622709 67305 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0216 16:50:55.623142 67305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17936-6821/.minikube/bin
I0216 16:50:55.624200 67305 out.go:298] Setting JSON to false
I0216 16:50:55.625357 67305 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2002,"bootTime":1708100254,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0216 16:50:55.625421 67305 start.go:139] virtualization: kvm guest
I0216 16:50:55.627566 67305 out.go:177] * [ingress-addon-legacy-988248] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0216 16:50:55.628951 67305 out.go:177] - MINIKUBE_LOCATION=17936
I0216 16:50:55.628983 67305 notify.go:220] Checking for updates...
I0216 16:50:55.630195 67305 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0216 16:50:55.631463 67305 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/17936-6821/kubeconfig
I0216 16:50:55.632975 67305 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/17936-6821/.minikube
I0216 16:50:55.634361 67305 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0216 16:50:55.635637 67305 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0216 16:50:55.637066 67305 driver.go:392] Setting default libvirt URI to qemu:///system
I0216 16:50:55.660386 67305 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
I0216 16:50:55.660498 67305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0216 16:50:55.714099 67305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-16 16:50:55.702525693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0216 16:50:55.714202 67305 docker.go:295] overlay module found
I0216 16:50:55.715869 67305 out.go:177] * Using the docker driver based on user configuration
I0216 16:50:55.717201 67305 start.go:299] selected driver: docker
I0216 16:50:55.717223 67305 start.go:903] validating driver "docker" against <nil>
I0216 16:50:55.717237 67305 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0216 16:50:55.718243 67305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0216 16:50:55.770105 67305 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2024-02-16 16:50:55.760742037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648001024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0216 16:50:55.770265 67305 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0216 16:50:55.770468 67305 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0216 16:50:55.772019 67305 out.go:177] * Using Docker driver with root privileges
I0216 16:50:55.773375 67305 cni.go:84] Creating CNI manager for ""
I0216 16:50:55.773417 67305 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0216 16:50:55.773433 67305 start_flags.go:323] config:
{Name:ingress-addon-legacy-988248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0216 16:50:55.774814 67305 out.go:177] * Starting control plane node ingress-addon-legacy-988248 in cluster ingress-addon-legacy-988248
I0216 16:50:55.776051 67305 cache.go:121] Beginning downloading kic base image for docker with docker
I0216 16:50:55.777418 67305 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
I0216 16:50:55.778656 67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 16:50:55.778772 67305 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
I0216 16:50:55.794808 67305 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
I0216 16:50:55.794845 67305 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
I0216 16:50:55.879522 67305 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0216 16:50:55.879555 67305 cache.go:56] Caching tarball of preloaded images
I0216 16:50:55.879766 67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 16:50:55.881669 67305 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0216 16:50:55.882964 67305 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0216 16:50:55.985756 67305 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0216 16:51:07.502110 67305 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0216 16:51:07.502206 67305 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0216 16:51:08.354473 67305 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0216 16:51:08.354841 67305 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/config.json ...
I0216 16:51:08.354873 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/config.json: {Name:mk98312a6968118c75080ccc2134599c6af7c4ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:08.355039 67305 cache.go:194] Successfully downloaded all kic artifacts
I0216 16:51:08.355064 67305 start.go:365] acquiring machines lock for ingress-addon-legacy-988248: {Name:mk3ecd0f6305afd0e654759010df5c333f00ace4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0216 16:51:08.355106 67305 start.go:369] acquired machines lock for "ingress-addon-legacy-988248" in 31.045µs
I0216 16:51:08.355123 67305 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-988248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0216 16:51:08.355203 67305 start.go:125] createHost starting for "" (driver="docker")
I0216 16:51:08.357356 67305 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0216 16:51:08.357561 67305 start.go:159] libmachine.API.Create for "ingress-addon-legacy-988248" (driver="docker")
I0216 16:51:08.357584 67305 client.go:168] LocalClient.Create starting
I0216 16:51:08.357680 67305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem
I0216 16:51:08.357715 67305 main.go:141] libmachine: Decoding PEM data...
I0216 16:51:08.357728 67305 main.go:141] libmachine: Parsing certificate...
I0216 16:51:08.357775 67305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem
I0216 16:51:08.357795 67305 main.go:141] libmachine: Decoding PEM data...
I0216 16:51:08.357803 67305 main.go:141] libmachine: Parsing certificate...
I0216 16:51:08.358096 67305 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0216 16:51:08.373724 67305 cli_runner.go:211] docker network inspect ingress-addon-legacy-988248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0216 16:51:08.373806 67305 network_create.go:281] running [docker network inspect ingress-addon-legacy-988248] to gather additional debugging logs...
I0216 16:51:08.373832 67305 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988248
W0216 16:51:08.389222 67305 cli_runner.go:211] docker network inspect ingress-addon-legacy-988248 returned with exit code 1
I0216 16:51:08.389263 67305 network_create.go:284] error running [docker network inspect ingress-addon-legacy-988248]: docker network inspect ingress-addon-legacy-988248: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-988248 not found
I0216 16:51:08.389279 67305 network_create.go:286] output of [docker network inspect ingress-addon-legacy-988248]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-988248 not found
** /stderr **
I0216 16:51:08.389414 67305 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0216 16:51:08.406580 67305 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0027c1a80}
I0216 16:51:08.406622 67305 network_create.go:124] attempt to create docker network ingress-addon-legacy-988248 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0216 16:51:08.406668 67305 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 ingress-addon-legacy-988248
I0216 16:51:08.466854 67305 network_create.go:108] docker network ingress-addon-legacy-988248 192.168.49.0/24 created
I0216 16:51:08.466889 67305 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-988248" container
I0216 16:51:08.466957 67305 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0216 16:51:08.482302 67305 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-988248 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --label created_by.minikube.sigs.k8s.io=true
I0216 16:51:08.498526 67305 oci.go:103] Successfully created a docker volume ingress-addon-legacy-988248
I0216 16:51:08.498675 67305 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-988248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --entrypoint /usr/bin/test -v ingress-addon-legacy-988248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
I0216 16:51:10.011341 67305 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-988248-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --entrypoint /usr/bin/test -v ingress-addon-legacy-988248:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.512603265s)
I0216 16:51:10.011371 67305 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-988248
I0216 16:51:10.011393 67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 16:51:10.011417 67305 kic.go:194] Starting extracting preloaded images to volume ...
I0216 16:51:10.011494 67305 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-988248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
I0216 16:51:14.755162 67305 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-988248:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (4.743612668s)
I0216 16:51:14.755198 67305 kic.go:203] duration metric: took 4.743778 seconds to extract preloaded images to volume
W0216 16:51:14.755334 67305 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0216 16:51:14.755445 67305 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0216 16:51:14.807343 67305 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-988248 --name ingress-addon-legacy-988248 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-988248 --network ingress-addon-legacy-988248 --ip 192.168.49.2 --volume ingress-addon-legacy-988248:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
I0216 16:51:15.092814 67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Running}}
I0216 16:51:15.111016 67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
I0216 16:51:15.128407 67305 cli_runner.go:164] Run: docker exec ingress-addon-legacy-988248 stat /var/lib/dpkg/alternatives/iptables
I0216 16:51:15.170453 67305 oci.go:144] the created container "ingress-addon-legacy-988248" has a running status.
I0216 16:51:15.170492 67305 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa...
I0216 16:51:15.244092 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0216 16:51:15.244138 67305 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0216 16:51:15.263254 67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
I0216 16:51:15.279182 67305 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0216 16:51:15.279202 67305 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-988248 chown docker:docker /home/docker/.ssh/authorized_keys]
I0216 16:51:15.317437 67305 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-988248 --format={{.State.Status}}
I0216 16:51:15.337076 67305 machine.go:88] provisioning docker machine ...
I0216 16:51:15.337109 67305 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-988248"
I0216 16:51:15.337165 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:15.352984 67305 main.go:141] libmachine: Using SSH client type: native
I0216 16:51:15.353344 67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0216 16:51:15.353361 67305 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-988248 && echo "ingress-addon-legacy-988248" | sudo tee /etc/hostname
I0216 16:51:15.353973 67305 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37462->127.0.0.1:32792: read: connection reset by peer
I0216 16:51:18.498423 67305 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-988248
I0216 16:51:18.498493 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:18.515375 67305 main.go:141] libmachine: Using SSH client type: native
I0216 16:51:18.515698 67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0216 16:51:18.515718 67305 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-988248' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-988248/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-988248' | sudo tee -a /etc/hosts;
fi
fi
I0216 16:51:18.644271 67305 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0216 16:51:18.644304 67305 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17936-6821/.minikube CaCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17936-6821/.minikube}
I0216 16:51:18.644349 67305 ubuntu.go:177] setting up certificates
I0216 16:51:18.644370 67305 provision.go:83] configureAuth start
I0216 16:51:18.644423 67305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988248
I0216 16:51:18.660770 67305 provision.go:138] copyHostCerts
I0216 16:51:18.660805 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
I0216 16:51:18.660831 67305 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem, removing ...
I0216 16:51:18.660837 67305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem
I0216 16:51:18.660901 67305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/ca.pem (1082 bytes)
I0216 16:51:18.660972 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
I0216 16:51:18.660990 67305 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem, removing ...
I0216 16:51:18.661007 67305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem
I0216 16:51:18.661039 67305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/cert.pem (1123 bytes)
I0216 16:51:18.661102 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
I0216 16:51:18.661119 67305 exec_runner.go:144] found /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem, removing ...
I0216 16:51:18.661123 67305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem
I0216 16:51:18.661143 67305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17936-6821/.minikube/key.pem (1679 bytes)
I0216 16:51:18.661187 67305 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-988248 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-988248]
I0216 16:51:18.813075 67305 provision.go:172] copyRemoteCerts
I0216 16:51:18.813130 67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0216 16:51:18.813161 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:18.829531 67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
I0216 16:51:18.924594 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0216 16:51:18.924670 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0216 16:51:18.947206 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem -> /etc/docker/server.pem
I0216 16:51:18.947274 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0216 16:51:18.969774 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0216 16:51:18.969836 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0216 16:51:18.992146 67305 provision.go:86] duration metric: configureAuth took 347.762118ms
I0216 16:51:18.992190 67305 ubuntu.go:193] setting minikube options for container-runtime
I0216 16:51:18.992374 67305 config.go:182] Loaded profile config "ingress-addon-legacy-988248": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0216 16:51:18.992421 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:19.009095 67305 main.go:141] libmachine: Using SSH client type: native
I0216 16:51:19.009471 67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0216 16:51:19.009485 67305 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0216 16:51:19.140570 67305 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0216 16:51:19.140608 67305 ubuntu.go:71] root file system type: overlay
I0216 16:51:19.140735 67305 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0216 16:51:19.140799 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:19.157311 67305 main.go:141] libmachine: Using SSH client type: native
I0216 16:51:19.157645 67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0216 16:51:19.157706 67305 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0216 16:51:19.298575 67305 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0216 16:51:19.298645 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:19.315887 67305 main.go:141] libmachine: Using SSH client type: native
I0216 16:51:19.316297 67305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil> [] 0s} 127.0.0.1 32792 <nil> <nil>}
I0216 16:51:19.316317 67305 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0216 16:51:19.994337 67305 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-02-06 21:12:51.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-02-16 16:51:19.292587221 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0216 16:51:19.994371 67305 machine.go:91] provisioned docker machine in 4.657273017s
I0216 16:51:19.994384 67305 client.go:171] LocalClient.Create took 11.636792969s
I0216 16:51:19.994401 67305 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-988248" took 11.636839631s
I0216 16:51:19.994408 67305 start.go:300] post-start starting for "ingress-addon-legacy-988248" (driver="docker")
I0216 16:51:19.994417 67305 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0216 16:51:19.994461 67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0216 16:51:19.994496 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:20.011197 67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
I0216 16:51:20.105286 67305 ssh_runner.go:195] Run: cat /etc/os-release
I0216 16:51:20.108460 67305 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0216 16:51:20.108499 67305 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0216 16:51:20.108511 67305 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0216 16:51:20.108521 67305 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0216 16:51:20.108542 67305 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/addons for local assets ...
I0216 16:51:20.108598 67305 filesync.go:126] Scanning /home/jenkins/minikube-integration/17936-6821/.minikube/files for local assets ...
I0216 16:51:20.108728 67305 filesync.go:149] local asset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> 136192.pem in /etc/ssl/certs
I0216 16:51:20.108743 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> /etc/ssl/certs/136192.pem
I0216 16:51:20.108854 67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0216 16:51:20.116938 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /etc/ssl/certs/136192.pem (1708 bytes)
I0216 16:51:20.138510 67305 start.go:303] post-start completed in 144.091536ms
I0216 16:51:20.138864 67305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988248
I0216 16:51:20.154681 67305 profile.go:148] Saving config to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/config.json ...
I0216 16:51:20.155011 67305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0216 16:51:20.155068 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:20.170852 67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
I0216 16:51:20.260774 67305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0216 16:51:20.264902 67305 start.go:128] duration metric: createHost completed in 11.909688255s
I0216 16:51:20.264925 67305 start.go:83] releasing machines lock for "ingress-addon-legacy-988248", held for 11.909809331s
I0216 16:51:20.264977 67305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-988248
I0216 16:51:20.280983 67305 ssh_runner.go:195] Run: cat /version.json
I0216 16:51:20.281034 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:20.281082 67305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0216 16:51:20.281149 67305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-988248
I0216 16:51:20.298368 67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
I0216 16:51:20.298754 67305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17936-6821/.minikube/machines/ingress-addon-legacy-988248/id_rsa Username:docker}
I0216 16:51:20.387536 67305 ssh_runner.go:195] Run: systemctl --version
I0216 16:51:20.478143 67305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0216 16:51:20.482446 67305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0216 16:51:20.504315 67305 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0216 16:51:20.504384 67305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0216 16:51:20.519602 67305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0216 16:51:20.534747 67305 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0216 16:51:20.534785 67305 start.go:475] detecting cgroup driver to use...
I0216 16:51:20.534821 67305 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0216 16:51:20.534938 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0216 16:51:20.549856 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0216 16:51:20.559177 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0216 16:51:20.568130 67305 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0216 16:51:20.568233 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0216 16:51:20.577369 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0216 16:51:20.586323 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0216 16:51:20.595267 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0216 16:51:20.604611 67305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0216 16:51:20.612748 67305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0216 16:51:20.621445 67305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0216 16:51:20.629198 67305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0216 16:51:20.636962 67305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0216 16:51:20.709750 67305 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0216 16:51:20.793247 67305 start.go:475] detecting cgroup driver to use...
I0216 16:51:20.793298 67305 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0216 16:51:20.793349 67305 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0216 16:51:20.804441 67305 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0216 16:51:20.804503 67305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0216 16:51:20.815382 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0216 16:51:20.831255 67305 ssh_runner.go:195] Run: which cri-dockerd
I0216 16:51:20.834415 67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0216 16:51:20.843176 67305 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0216 16:51:20.861068 67305 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0216 16:51:20.968587 67305 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0216 16:51:21.049973 67305 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0216 16:51:21.050100 67305 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0216 16:51:21.066463 67305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0216 16:51:21.136579 67305 ssh_runner.go:195] Run: sudo systemctl restart docker
I0216 16:51:21.366158 67305 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0216 16:51:21.387632 67305 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
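docker version --format {{.Server.Version}} evaluates a Go template against the version payload and prints only the daemon version (25.0.3 on this node). The same probe from Go, shelling out with os/exec (a sketch; assumes docker is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the docker daemon for its server version, exactly as the log does.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // e.g. "25.0.3"
}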
I0216 16:51:21.414900 67305 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
I0216 16:51:21.414996 67305 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-988248 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0216 16:51:21.433039 67305 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0216 16:51:21.436925 67305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0216 16:51:21.447308 67305 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0216 16:51:21.447379 67305 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0216 16:51:21.465592 67305 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0216 16:51:21.465637 67305 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0216 16:51:21.465687 67305 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0216 16:51:21.473676 67305 ssh_runner.go:195] Run: which lz4
I0216 16:51:21.476774 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0216 16:51:21.476869 67305 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0216 16:51:21.479921 67305 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0216 16:51:21.479950 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0216 16:51:22.297868 67305 docker.go:649] Took 0.821034 seconds to copy over tarball
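For scale: the scp above moved 424164442 bytes (~404 MiB) in 0.821034 s, roughly 490 MiB/s over the loopback SSH connection. The arithmetic:

package main

import "fmt"

func main() {
	const bytes = 424164442.0 // preloaded-images tarball size from the log
	const secs = 0.821034     // "Took 0.821034 seconds to copy over tarball"
	fmt.Printf("%.1f MiB/s\n", bytes/secs/(1<<20)) // prints about 492.7 MiB/s
}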
I0216 16:51:22.297929 67305 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0216 16:51:24.354361 67305 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.056409131s)
I0216 16:51:24.354388 67305 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0216 16:51:24.416476 67305 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0216 16:51:24.426121 67305 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0216 16:51:24.445221 67305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0216 16:51:24.521911 67305 ssh_runner.go:195] Run: sudo systemctl restart docker
I0216 16:51:27.132947 67305 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.610998812s)
I0216 16:51:27.133017 67305 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0216 16:51:27.150880 67305 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0216 16:51:27.150902 67305 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0216 16:51:27.150910 67305 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0216 16:51:27.152268 67305 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0216 16:51:27.152298 67305 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0216 16:51:27.152308 67305 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0216 16:51:27.152340 67305 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0216 16:51:27.152269 67305 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0216 16:51:27.152297 67305 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0216 16:51:27.152463 67305 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0216 16:51:27.152522 67305 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0216 16:51:27.153173 67305 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0216 16:51:27.153281 67305 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0216 16:51:27.153296 67305 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0216 16:51:27.153296 67305 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0216 16:51:27.153308 67305 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0216 16:51:27.153336 67305 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0216 16:51:27.153363 67305 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0216 16:51:27.153285 67305 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
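The "wasn't preloaded" verdicts at 16:51:21.465637 and 16:51:27.150902 follow from a plain name mismatch: the v1.18-era preload tarball carries k8s.gcr.io tags, while this minikube looks up registry.k8s.io names, so every exact-name check misses and the images fall through to the cache-transfer path. A toy illustration of that check (the mechanism is an inference from the image lists above, not minikube's actual code):

package main

import "fmt"

func main() {
	// Tags actually present after loading the preload (per the "Got preloaded images" list).
	preloaded := map[string]bool{
		"k8s.gcr.io/kube-apiserver:v1.18.20": true,
		"k8s.gcr.io/etcd:3.4.3-0":            true,
	}
	// Name this minikube expects under the newer registry.
	want := "registry.k8s.io/kube-apiserver:v1.18.20"
	fmt.Println(preloaded[want]) // false -> "wasn't preloaded", triggers LoadImages
}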
I0216 16:51:27.358011 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0216 16:51:27.370246 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0216 16:51:27.375864 67305 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0216 16:51:27.375917 67305 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0216 16:51:27.375957 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0216 16:51:27.388762 67305 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0216 16:51:27.388813 67305 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0216 16:51:27.388859 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0216 16:51:27.393579 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0216 16:51:27.408270 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0216 16:51:27.415310 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0216 16:51:27.435039 67305 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0216 16:51:27.435084 67305 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0216 16:51:27.435119 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0216 16:51:27.452312 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0216 16:51:27.522242 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0216 16:51:27.525091 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0216 16:51:27.527170 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0216 16:51:27.527191 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0216 16:51:27.543210 67305 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0216 16:51:27.543258 67305 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0216 16:51:27.543307 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0216 16:51:27.544948 67305 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0216 16:51:27.545001 67305 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0216 16:51:27.545041 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0216 16:51:27.549983 67305 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0216 16:51:27.550020 67305 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0216 16:51:27.550027 67305 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0216 16:51:27.550040 67305 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0216 16:51:27.550072 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0216 16:51:27.550076 67305 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0216 16:51:27.562758 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0216 16:51:27.564255 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0216 16:51:27.568571 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0216 16:51:27.569668 67305 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0216 16:51:27.997741 67305 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0216 16:51:28.015355 67305 cache_images.go:92] LoadImages completed in 864.431051ms
W0216 16:51:28.015452 67305 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17936-6821/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
I0216 16:51:28.015524 67305 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0216 16:51:28.064031 67305 cni.go:84] Creating CNI manager for ""
I0216 16:51:28.064055 67305 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0216 16:51:28.064068 67305 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0216 16:51:28.064084 67305 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-988248 NodeName:ingress-addon-legacy-988248 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0216 16:51:28.064240 67305 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-988248"
kubeletExtraArgs:
node-ip: 192.168.49.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0216 16:51:28.064306 67305 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-988248 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0216 16:51:28.064358 67305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0216 16:51:28.072871 67305 binaries.go:44] Found k8s binaries, skipping transfer
I0216 16:51:28.072960 67305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0216 16:51:28.081499 67305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0216 16:51:28.096813 67305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0216 16:51:28.112291 67305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0216 16:51:28.128483 67305 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0216 16:51:28.131815 67305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0216 16:51:28.142539 67305 certs.go:56] Setting up /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248 for IP: 192.168.49.2
I0216 16:51:28.142579 67305 certs.go:190] acquiring lock for shared ca certs: {Name:mk9d742a64083da672505a071544cb22b9fe542d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.142731 67305 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key
I0216 16:51:28.142793 67305 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key
I0216 16:51:28.142857 67305 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.key
I0216 16:51:28.142874 67305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.crt with IP's: []
I0216 16:51:28.238957 67305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.crt ...
I0216 16:51:28.238995 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.crt: {Name:mk38e6f23e3ecbd1fa8e0f54e1c8bcc52a30609c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.239183 67305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.key ...
I0216 16:51:28.239201 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/client.key: {Name:mk9a1cc2fd946429c955c6e35a175fe1c94bbc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.239308 67305 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2
I0216 16:51:28.239327 67305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0216 16:51:28.372995 67305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2 ...
I0216 16:51:28.373027 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2: {Name:mk73a2ac8fec36249407b47a716b224e9495eb84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.373217 67305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2 ...
I0216 16:51:28.373240 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2: {Name:mk53ca1d3d89356c3c4adceada67f4311abde604 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.373335 67305 certs.go:337] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt
I0216 16:51:28.373432 67305 certs.go:341] copying /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key
I0216 16:51:28.373520 67305 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key
I0216 16:51:28.373541 67305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt with IP's: []
I0216 16:51:28.491484 67305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt ...
I0216 16:51:28.491521 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt: {Name:mkd95586f34fea099603623625b9eb1f83dece71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.491727 67305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key ...
I0216 16:51:28.491748 67305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key: {Name:mk6fa872ea15a338d34e8a60cd2cd3081654123c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0216 16:51:28.491853 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0216 16:51:28.491880 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0216 16:51:28.491899 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0216 16:51:28.491918 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0216 16:51:28.491940 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0216 16:51:28.491963 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0216 16:51:28.491985 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0216 16:51:28.492009 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0216 16:51:28.492085 67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem (1338 bytes)
W0216 16:51:28.492142 67305 certs.go:433] ignoring /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619_empty.pem, impossibly tiny 0 bytes
I0216 16:51:28.492176 67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca-key.pem (1675 bytes)
I0216 16:51:28.492225 67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/ca.pem (1082 bytes)
I0216 16:51:28.492265 67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/cert.pem (1123 bytes)
I0216 16:51:28.492304 67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/home/jenkins/minikube-integration/17936-6821/.minikube/certs/key.pem (1679 bytes)
I0216 16:51:28.492375 67305 certs.go:437] found cert: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem (1708 bytes)
I0216 16:51:28.492422 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0216 16:51:28.492451 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem -> /usr/share/ca-certificates/13619.pem
I0216 16:51:28.492473 67305 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem -> /usr/share/ca-certificates/136192.pem
I0216 16:51:28.493059 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0216 16:51:28.515946 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0216 16:51:28.537561 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0216 16:51:28.559344 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/profiles/ingress-addon-legacy-988248/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0216 16:51:28.580562 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0216 16:51:28.601788 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0216 16:51:28.622954 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0216 16:51:28.644619 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0216 16:51:28.666619 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0216 16:51:28.688978 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/certs/13619.pem --> /usr/share/ca-certificates/13619.pem (1338 bytes)
I0216 16:51:28.710724 67305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17936-6821/.minikube/files/etc/ssl/certs/136192.pem --> /usr/share/ca-certificates/136192.pem (1708 bytes)
I0216 16:51:28.732115 67305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0216 16:51:28.748121 67305 ssh_runner.go:195] Run: openssl version
I0216 16:51:28.753129 67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0216 16:51:28.761505 67305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0216 16:51:28.764814 67305 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 16 16:43 /usr/share/ca-certificates/minikubeCA.pem
I0216 16:51:28.764862 67305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0216 16:51:28.771458 67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0216 16:51:28.779958 67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13619.pem && ln -fs /usr/share/ca-certificates/13619.pem /etc/ssl/certs/13619.pem"
I0216 16:51:28.788471 67305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13619.pem
I0216 16:51:28.791656 67305 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 16 16:47 /usr/share/ca-certificates/13619.pem
I0216 16:51:28.791716 67305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13619.pem
I0216 16:51:28.798054 67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13619.pem /etc/ssl/certs/51391683.0"
I0216 16:51:28.806779 67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136192.pem && ln -fs /usr/share/ca-certificates/136192.pem /etc/ssl/certs/136192.pem"
I0216 16:51:28.815520 67305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136192.pem
I0216 16:51:28.818673 67305 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 16 16:47 /usr/share/ca-certificates/136192.pem
I0216 16:51:28.818745 67305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136192.pem
I0216 16:51:28.825110 67305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136192.pem /etc/ssl/certs/3ec20f2e.0"
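The test/ln/openssl sequence above implements OpenSSL's hashed-directory lookup: TLS clients find a CA in /etc/ssl/certs by a filename derived from the certificate's subject hash (b5213941.0, 51391683.0, and 3ec20f2e.0 here). A sketch of one such linking step in Go (illustrative; it shells out to openssl just as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// "openssl x509 -hash" prints the subject hash OpenSSL uses for lookups.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Equivalent of the "test -L ... || ln -fs ..." guard in the log.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
	fmt.Println("linked", link)
}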
I0216 16:51:28.833862 67305 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0216 16:51:28.836992 67305 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0216 16:51:28.837035 67305 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-988248 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-988248 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0216 16:51:28.837142 67305 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0216 16:51:28.853402 67305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0216 16:51:28.861330 67305 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0216 16:51:28.869251 67305 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0216 16:51:28.869302 67305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0216 16:51:28.877038 67305 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0216 16:51:28.877087 67305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
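The Start line runs kubeadm under a prefixed PATH so the version-pinned binary in /var/lib/minikube/binaries/v1.18.20 wins over anything else on the node. The same environment trick from Go (a sketch of the invocation only, not minikube's runner; the preflight-ignore flags are elided):

package main

import (
	"os"
	"os/exec"
)

func main() {
	bin := "/var/lib/minikube/binaries/v1.18.20"
	// Mirror: sudo env PATH="<bin>:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	cmd := exec.Command("sudo", "env", "PATH="+bin+":"+os.Getenv("PATH"),
		"kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1) // kubeadm's own error text has already been streamed
	}
}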
I0216 16:51:28.919141 67305 kubeadm.go:322] W0216 16:51:28.918543 1839 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0216 16:51:29.030848 67305 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0216 16:51:29.079459 67305 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0216 16:51:29.079721 67305 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
I0216 16:51:29.145914 67305 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0216 16:51:31.548534 67305 kubeadm.go:322] W0216 16:51:31.548201 1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 16:51:31.549498 67305 kubeadm.go:322] W0216 16:51:31.549264 1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 16:55:31.553802 67305 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0216 16:55:31.553901 67305 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0216 16:55:31.556558 67305 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0216 16:55:31.556665 67305 kubeadm.go:322] [preflight] Running pre-flight checks
I0216 16:55:31.556769 67305 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0216 16:55:31.556859 67305 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
I0216 16:55:31.556927 67305 kubeadm.go:322] DOCKER_VERSION: 25.0.3
I0216 16:55:31.556983 67305 kubeadm.go:322] OS: Linux
I0216 16:55:31.557052 67305 kubeadm.go:322] CGROUPS_CPU: enabled
I0216 16:55:31.557113 67305 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0216 16:55:31.557154 67305 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0216 16:55:31.557195 67305 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0216 16:55:31.557244 67305 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0216 16:55:31.557284 67305 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0216 16:55:31.557346 67305 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0216 16:55:31.557423 67305 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0216 16:55:31.557522 67305 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0216 16:55:31.557649 67305 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0216 16:55:31.557727 67305 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0216 16:55:31.557782 67305 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0216 16:55:31.557859 67305 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0216 16:55:31.559977 67305 out.go:204] - Generating certificates and keys ...
I0216 16:55:31.560054 67305 kubeadm.go:322] [certs] Using existing ca certificate authority
I0216 16:55:31.560108 67305 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0216 16:55:31.560193 67305 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0216 16:55:31.560244 67305 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0216 16:55:31.560304 67305 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0216 16:55:31.560345 67305 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0216 16:55:31.560411 67305 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0216 16:55:31.560569 67305 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0216 16:55:31.560639 67305 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0216 16:55:31.560756 67305 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0216 16:55:31.560819 67305 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0216 16:55:31.560889 67305 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0216 16:55:31.560930 67305 kubeadm.go:322] [certs] Generating "sa" key and public key
I0216 16:55:31.560988 67305 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0216 16:55:31.561038 67305 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0216 16:55:31.561084 67305 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0216 16:55:31.561138 67305 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0216 16:55:31.561183 67305 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0216 16:55:31.561246 67305 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0216 16:55:31.563286 67305 out.go:204] - Booting up control plane ...
I0216 16:55:31.563375 67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0216 16:55:31.563465 67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0216 16:55:31.563545 67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0216 16:55:31.563637 67305 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0216 16:55:31.563770 67305 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0216 16:55:31.563814 67305 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0216 16:55:31.563819 67305 kubeadm.go:322]
I0216 16:55:31.563853 67305 kubeadm.go:322] Unfortunately, an error has occurred:
I0216 16:55:31.563887 67305 kubeadm.go:322] timed out waiting for the condition
I0216 16:55:31.563893 67305 kubeadm.go:322]
I0216 16:55:31.563924 67305 kubeadm.go:322] This error is likely caused by:
I0216 16:55:31.563958 67305 kubeadm.go:322] - The kubelet is not running
I0216 16:55:31.564067 67305 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0216 16:55:31.564078 67305 kubeadm.go:322]
I0216 16:55:31.564203 67305 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0216 16:55:31.564236 67305 kubeadm.go:322] - 'systemctl status kubelet'
I0216 16:55:31.564316 67305 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0216 16:55:31.564329 67305 kubeadm.go:322]
I0216 16:55:31.564412 67305 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0216 16:55:31.564491 67305 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI.
I0216 16:55:31.564497 67305 kubeadm.go:322]
I0216 16:55:31.564568 67305 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
I0216 16:55:31.564637 67305 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0216 16:55:31.564703 67305 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0216 16:55:31.564740 67305 kubeadm.go:322] - 'docker logs CONTAINERID'
I0216 16:55:31.564770 67305 kubeadm.go:322]
W0216 16:55:31.564922 67305 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1051-gcp
DOCKER_VERSION: 25.0.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 16:51:28.918543 1839 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 16:51:31.548201 1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 16:51:31.549264 1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
[0;37mKERNEL_VERSION[0m: [0;32m5.15.0-1051-gcp[0m
[0;37mDOCKER_VERSION[0m: [0;32m25.0.3[0m
[0;37mOS[0m: [0;32mLinux[0m
[0;37mCGROUPS_CPU[0m: [0;32menabled[0m
[0;37mCGROUPS_CPUACCT[0m: [0;32menabled[0m
[0;37mCGROUPS_CPUSET[0m: [0;32menabled[0m
[0;37mCGROUPS_DEVICES[0m: [0;32menabled[0m
[0;37mCGROUPS_FREEZER[0m: [0;32menabled[0m
[0;37mCGROUPS_MEMORY[0m: [0;32menabled[0m
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-988248 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 16:51:28.918543 1839 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 16:51:31.548201 1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 16:51:31.549264 1839 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
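The kubeadm troubleshooting advice above can be run from inside the minikube node. A minimal sketch, assuming the docker driver and the profile name used in this run (the individual commands are the ones kubeadm itself suggests):

  minikube ssh -p ingress-addon-legacy-988248    # open a shell inside the node container
  sudo systemctl status kubelet                  # is the kubelet service active?
  sudo journalctl -xeu kubelet | tail -n 50      # most recent kubelet errors
  docker ps -a | grep kube | grep -v pause       # were any control-plane containers created?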
I0216 16:55:31.565007 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0216 16:55:32.315641 67305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0216 16:55:32.327490 67305 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0216 16:55:32.327549 67305 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0216 16:55:32.336388 67305 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0216 16:55:32.336432 67305 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0216 16:55:32.382152 67305 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0216 16:55:32.382246 67305 kubeadm.go:322] [preflight] Running pre-flight checks
I0216 16:55:32.565644 67305 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0216 16:55:32.565730 67305 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
I0216 16:55:32.565791 67305 kubeadm.go:322] DOCKER_VERSION: 25.0.3
I0216 16:55:32.565837 67305 kubeadm.go:322] OS: Linux
I0216 16:55:32.565887 67305 kubeadm.go:322] CGROUPS_CPU: enabled
I0216 16:55:32.565974 67305 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0216 16:55:32.566053 67305 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0216 16:55:32.566122 67305 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0216 16:55:32.566197 67305 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0216 16:55:32.566249 67305 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0216 16:55:32.636815 67305 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0216 16:55:32.636907 67305 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0216 16:55:32.637021 67305 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0216 16:55:32.816068 67305 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0216 16:55:32.817160 67305 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0216 16:55:32.817203 67305 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0216 16:55:32.900970 67305 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0216 16:55:32.904834 67305 out.go:204] - Generating certificates and keys ...
I0216 16:55:32.904961 67305 kubeadm.go:322] [certs] Using existing ca certificate authority
I0216 16:55:32.905088 67305 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0216 16:55:32.905205 67305 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0216 16:55:32.905299 67305 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0216 16:55:32.905393 67305 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0216 16:55:32.905466 67305 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0216 16:55:32.905560 67305 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0216 16:55:32.905662 67305 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0216 16:55:32.905771 67305 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0216 16:55:32.906051 67305 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0216 16:55:32.906103 67305 kubeadm.go:322] [certs] Using the existing "sa" key
I0216 16:55:32.906203 67305 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0216 16:55:33.127868 67305 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0216 16:55:33.303029 67305 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0216 16:55:33.359183 67305 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0216 16:55:33.742670 67305 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0216 16:55:33.743221 67305 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0216 16:55:33.745360 67305 out.go:204] - Booting up control plane ...
I0216 16:55:33.745464 67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0216 16:55:33.749040 67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0216 16:55:33.751004 67305 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0216 16:55:33.751530 67305 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0216 16:55:33.753499 67305 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0216 16:56:13.753952 67305 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0216 16:59:33.754911 67305 kubeadm.go:322]
I0216 16:59:33.755011 67305 kubeadm.go:322] Unfortunately, an error has occurred:
I0216 16:59:33.755067 67305 kubeadm.go:322] timed out waiting for the condition
I0216 16:59:33.755076 67305 kubeadm.go:322]
I0216 16:59:33.755124 67305 kubeadm.go:322] This error is likely caused by:
I0216 16:59:33.755171 67305 kubeadm.go:322] - The kubelet is not running
I0216 16:59:33.755313 67305 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0216 16:59:33.755360 67305 kubeadm.go:322]
I0216 16:59:33.755515 67305 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0216 16:59:33.755561 67305 kubeadm.go:322] - 'systemctl status kubelet'
I0216 16:59:33.755613 67305 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0216 16:59:33.755623 67305 kubeadm.go:322]
I0216 16:59:33.755762 67305 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0216 16:59:33.755880 67305 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI.
I0216 16:59:33.755891 67305 kubeadm.go:322]
I0216 16:59:33.755991 67305 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in Docker:
I0216 16:59:33.756063 67305 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0216 16:59:33.756178 67305 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0216 16:59:33.756238 67305 kubeadm.go:322] - 'docker logs CONTAINERID'
I0216 16:59:33.756248 67305 kubeadm.go:322]
I0216 16:59:33.758003 67305 kubeadm.go:322] W0216 16:55:32.381558 5490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0216 16:59:33.758237 67305 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0216 16:59:33.758413 67305 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0216 16:59:33.758685 67305 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
I0216 16:59:33.758797 67305 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0216 16:59:33.758962 67305 kubeadm.go:322] W0216 16:55:33.748682 5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 16:59:33.759123 67305 kubeadm.go:322] W0216 16:55:33.750669 5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0216 16:59:33.759240 67305 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0216 16:59:33.759427 67305 kubeadm.go:406] StartCluster complete in 8m4.922391988s
I0216 16:59:33.759456 67305 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0216 16:59:33.759527 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0216 16:59:33.778294 67305 logs.go:276] 0 containers: []
W0216 16:59:33.778320 67305 logs.go:278] No container was found matching "kube-apiserver"
I0216 16:59:33.778380 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0216 16:59:33.796212 67305 logs.go:276] 0 containers: []
W0216 16:59:33.796241 67305 logs.go:278] No container was found matching "etcd"
I0216 16:59:33.796292 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0216 16:59:33.813696 67305 logs.go:276] 0 containers: []
W0216 16:59:33.813722 67305 logs.go:278] No container was found matching "coredns"
I0216 16:59:33.813769 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0216 16:59:33.832412 67305 logs.go:276] 0 containers: []
W0216 16:59:33.832437 67305 logs.go:278] No container was found matching "kube-scheduler"
I0216 16:59:33.832481 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0216 16:59:33.850975 67305 logs.go:276] 0 containers: []
W0216 16:59:33.850997 67305 logs.go:278] No container was found matching "kube-proxy"
I0216 16:59:33.851048 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0216 16:59:33.868626 67305 logs.go:276] 0 containers: []
W0216 16:59:33.868652 67305 logs.go:278] No container was found matching "kube-controller-manager"
I0216 16:59:33.868707 67305 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0216 16:59:33.886353 67305 logs.go:276] 0 containers: []
W0216 16:59:33.886381 67305 logs.go:278] No container was found matching "kindnet"
I0216 16:59:33.886392 67305 logs.go:123] Gathering logs for kubelet ...
I0216 16:59:33.886403 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0216 16:59:33.907062 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.531525 5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
W0216 16:59:33.907226 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.532621 5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
W0216 16:59:33.913677 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:07 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:07.526485 5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
W0216 16:59:33.917372 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:11 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:11.525019 5718 pod_workers.go:191] Error syncing pod 6aefdd7d4cb77909c7f85262968986ab ("etcd-ingress-addon-legacy-988248_kube-system(6aefdd7d4cb77909c7f85262968986ab)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
W0216 16:59:33.918583 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:12 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:12.525511 5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
W0216 16:59:33.921422 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:15 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:15.525958 5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
W0216 16:59:33.925270 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:19 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:19.525531 5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
W0216 16:59:33.929555 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:24 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:24.526142 5718 pod_workers.go:191] Error syncing pod 6aefdd7d4cb77909c7f85262968986ab ("etcd-ingress-addon-legacy-988248_kube-system(6aefdd7d4cb77909c7f85262968986ab)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
W0216 16:59:33.931166 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:25 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:25.524555 5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
W0216 16:59:33.933725 67305 logs.go:138] Found kubelet problem: Feb 16 16:59:28 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:28.525498 5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
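The repeated ImageInspectError entries above ("Id or size of image ... is not set") say the kubelet got no usable metadata back when inspecting the control-plane images. A hedged way to reproduce that check by hand from inside the node, with image names taken from this log (Id and Size are standard fields of `docker image inspect`):

  docker image inspect k8s.gcr.io/kube-apiserver:v1.18.20 --format '{{.Id}} {{.Size}}'
  # an error or an empty Id/Size here matches the kubelet failure; re-pulling is one way to retry:
  docker pull k8s.gcr.io/kube-apiserver:v1.18.20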
I0216 16:59:33.937737 67305 logs.go:123] Gathering logs for dmesg ...
I0216 16:59:33.937763 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0216 16:59:33.955746 67305 logs.go:123] Gathering logs for describe nodes ...
I0216 16:59:33.955782 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0216 16:59:34.013814 67305 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0216 16:59:34.013848 67305 logs.go:123] Gathering logs for Docker ...
I0216 16:59:34.013859 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0216 16:59:34.032858 67305 logs.go:123] Gathering logs for container status ...
I0216 16:59:34.032892 67305 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0216 16:59:34.069969 67305 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1051-gcp
DOCKER_VERSION: 25.0.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0216 16:55:32.381558 5490 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0216 16:55:33.748682 5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0216 16:55:33.750669 5490 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0216 16:59:34.070020 67305 out.go:239] *
W0216 16:59:34.070082 67305 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0216 16:59:34.070110 67305 out.go:239] *
W0216 16:59:34.071023 67305 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0216 16:59:34.073878 67305 out.go:177] X Problems detected in kubelet:
I0216 16:59:34.075682 67305 out.go:177] Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.531525 5718 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-988248_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
I0216 16:59:34.077536 67305 out.go:177] Feb 16 16:59:01 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:01.532621 5718 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-988248_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
I0216 16:59:34.079230 67305 out.go:177] Feb 16 16:59:07 ingress-addon-legacy-988248 kubelet[5718]: E0216 16:59:07.526485 5718 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-988248_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
I0216 16:59:34.082467 67305 out.go:177]
W0216 16:59:34.083898 67305 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W0216 16:59:34.083955 67305 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0216 16:59:34.083985 67305 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
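One way to apply the suggestion above is to recreate the profile with the kubelet pinned to the systemd cgroup driver. A minimal sketch reusing the flags from this test invocation; only the last flag is new, and it is exactly the one named in the suggestion:

  minikube delete -p ingress-addon-legacy-988248
  minikube start -p ingress-addon-legacy-988248 \
    --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
    --driver=docker --container-runtime=docker \
    --extra-config=kubelet.cgroup-driver=systemd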
I0216 16:59:34.085850 67305 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-988248 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (518.58s)