=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker
E0223 00:39:56.049518 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:40:37.010226 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:41:58.932705 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:43:35.082688 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.087985 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.098244 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.118484 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.158766 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.239108 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.399511 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:35.720115 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:36.361130 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:37.641400 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:40.203189 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:45.323748 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:43:55.564157 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:44:15.086929 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:44:16.044892 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:44:42.772989 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/addons-342517/client.crt: no such file or directory
E0223 00:44:57.005516 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
E0223 00:46:18.926225 324375 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/functional-511250/client.crt: no such file or directory
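	Note: the repeated cert_rotation.go:168 errors above are emitted by client-go's certificate-rotation watcher, which appears to still be tracking kubeconfig entries for earlier, already-deleted test profiles (addons-342517, functional-511250) whose client.crt files were removed with those profiles. They look like noise from a shared kubeconfig rather than the cause of this failure. A minimal cleanup sketch, assuming the stale kubeconfig contexts/clusters/users are named after those profiles (minikube's usual naming):
	# Drop kubeconfig entries left behind by deleted minikube profiles
	# (entry names assumed to match the profile names).
	for p in addons-342517 functional-511250; do
	  kubectl config delete-context "$p"
	  kubectl config delete-cluster "$p"
	  kubectl config unset "users.$p"
	done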
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker: exit status 109 (8m31.075513803s)
-- stdout --
* [ingress-addon-legacy-838368] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18233
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node ingress-addon-legacy-838368 in cluster ingress-addon-legacy-838368
* Pulling base image v0.0.42-1708008208-17936 ...
* Downloading Kubernetes v1.18.20 preload ...
* Creating docker container (CPUs=2, Memory=4096MB) ...
* Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Feb 23 00:47:50 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:50.813296 5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
Feb 23 00:47:52 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:52.812663 5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
Feb 23 00:47:57 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:57.813070 5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
-- /stdout --
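	Note: the ImageInspectError lines above look like the proximate failure: under Docker Engine 25.0.3, the v1.18 kubelet's dockershim image inspect returns no image ID or size for the preloaded k8s.gcr.io control-plane images, so the static pods never start and minikube start eventually gives up with exit status 109. A way to confirm from the host, sketched under the assumption that the node container for this profile is still running:
	# Inspect a preloaded control-plane image inside the node container;
	# an empty Id/Size here would match the kubelet error above.
	minikube -p ingress-addon-legacy-838368 ssh -- \
	  docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.18.20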
** stderr **
I0223 00:39:49.286234 377758 out.go:291] Setting OutFile to fd 1 ...
I0223 00:39:49.286523 377758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:49.286533 377758 out.go:304] Setting ErrFile to fd 2...
I0223 00:39:49.286537 377758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0223 00:39:49.286763 377758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-317564/.minikube/bin
I0223 00:39:49.287410 377758 out.go:298] Setting JSON to false
I0223 00:39:49.288552 377758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4938,"bootTime":1708643851,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0223 00:39:49.288621 377758 start.go:139] virtualization: kvm guest
I0223 00:39:49.290919 377758 out.go:177] * [ingress-addon-legacy-838368] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0223 00:39:49.292433 377758 notify.go:220] Checking for updates...
I0223 00:39:49.292464 377758 out.go:177] - MINIKUBE_LOCATION=18233
I0223 00:39:49.293942 377758 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0223 00:39:49.295295 377758 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18233-317564/kubeconfig
I0223 00:39:49.296534 377758 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-317564/.minikube
I0223 00:39:49.297788 377758 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0223 00:39:49.299009 377758 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0223 00:39:49.300382 377758 driver.go:392] Setting default libvirt URI to qemu:///system
I0223 00:39:49.322464 377758 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
I0223 00:39:49.322647 377758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 00:39:49.372750 377758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2024-02-23 00:39:49.36352106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 00:39:49.372912 377758 docker.go:295] overlay module found
I0223 00:39:49.374890 377758 out.go:177] * Using the docker driver based on user configuration
I0223 00:39:49.376208 377758 start.go:299] selected driver: docker
I0223 00:39:49.376222 377758 start.go:903] validating driver "docker" against <nil>
I0223 00:39:49.376234 377758 start.go:914] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0223 00:39:49.377030 377758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0223 00:39:49.428367 377758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:48 SystemTime:2024-02-23 00:39:49.420178372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1051-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0223 00:39:49.428544 377758 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0223 00:39:49.428753 377758 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0223 00:39:49.430289 377758 out.go:177] * Using Docker driver with root privileges
I0223 00:39:49.431632 377758 cni.go:84] Creating CNI manager for ""
I0223 00:39:49.431664 377758 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0223 00:39:49.431674 377758 start_flags.go:323] config:
{Name:ingress-addon-legacy-838368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0223 00:39:49.433186 377758 out.go:177] * Starting control plane node ingress-addon-legacy-838368 in cluster ingress-addon-legacy-838368
I0223 00:39:49.434544 377758 cache.go:121] Beginning downloading kic base image for docker with docker
I0223 00:39:49.435753 377758 out.go:177] * Pulling base image v0.0.42-1708008208-17936 ...
I0223 00:39:49.436850 377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 00:39:49.436951 377758 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon
I0223 00:39:49.452305 377758 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf in local docker daemon, skipping pull
I0223 00:39:49.452328 377758 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf exists in daemon, skipping load
I0223 00:39:49.472648 377758 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 00:39:49.472678 377758 cache.go:56] Caching tarball of preloaded images
I0223 00:39:49.472813 377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 00:39:49.474579 377758 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0223 00:39:49.476055 377758 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 00:39:49.512440 377758 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0223 00:39:53.574149 377758 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 00:39:53.574260 377758 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0223 00:39:54.367431 377758 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0223 00:39:54.367852 377758 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/config.json ...
I0223 00:39:54.367893 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/config.json: {Name:mk3673064d6872c19f71258f1deec8112e0ae3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:39:54.368118 377758 cache.go:194] Successfully downloaded all kic artifacts
I0223 00:39:54.368148 377758 start.go:365] acquiring machines lock for ingress-addon-legacy-838368: {Name:mk83b6f61dd07162aa4ec11c4e638a0950891881 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0223 00:39:54.368211 377758 start.go:369] acquired machines lock for "ingress-addon-legacy-838368" in 45.497µs
I0223 00:39:54.368234 377758 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-838368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0223 00:39:54.368341 377758 start.go:125] createHost starting for "" (driver="docker")
I0223 00:39:54.370711 377758 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
I0223 00:39:54.370979 377758 start.go:159] libmachine.API.Create for "ingress-addon-legacy-838368" (driver="docker")
I0223 00:39:54.371020 377758 client.go:168] LocalClient.Create starting
I0223 00:39:54.371097 377758 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem
I0223 00:39:54.371143 377758 main.go:141] libmachine: Decoding PEM data...
I0223 00:39:54.371175 377758 main.go:141] libmachine: Parsing certificate...
I0223 00:39:54.371246 377758 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem
I0223 00:39:54.371279 377758 main.go:141] libmachine: Decoding PEM data...
I0223 00:39:54.371298 377758 main.go:141] libmachine: Parsing certificate...
I0223 00:39:54.371674 377758 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-838368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0223 00:39:54.387645 377758 cli_runner.go:211] docker network inspect ingress-addon-legacy-838368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0223 00:39:54.387718 377758 network_create.go:281] running [docker network inspect ingress-addon-legacy-838368] to gather additional debugging logs...
I0223 00:39:54.387750 377758 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-838368
W0223 00:39:54.402418 377758 cli_runner.go:211] docker network inspect ingress-addon-legacy-838368 returned with exit code 1
I0223 00:39:54.402460 377758 network_create.go:284] error running [docker network inspect ingress-addon-legacy-838368]: docker network inspect ingress-addon-legacy-838368: exit status 1
stdout:
[]
stderr:
Error response from daemon: network ingress-addon-legacy-838368 not found
I0223 00:39:54.402477 377758 network_create.go:286] output of [docker network inspect ingress-addon-legacy-838368]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network ingress-addon-legacy-838368 not found
** /stderr **
I0223 00:39:54.402573 377758 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0223 00:39:54.417789 377758 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025a3200}
I0223 00:39:54.417825 377758 network_create.go:124] attempt to create docker network ingress-addon-legacy-838368 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0223 00:39:54.417866 377758 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 ingress-addon-legacy-838368
I0223 00:39:54.469358 377758 network_create.go:108] docker network ingress-addon-legacy-838368 192.168.49.0/24 created
I0223 00:39:54.469393 377758 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-838368" container
I0223 00:39:54.469459 377758 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0223 00:39:54.483915 377758 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-838368 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --label created_by.minikube.sigs.k8s.io=true
I0223 00:39:54.499488 377758 oci.go:103] Successfully created a docker volume ingress-addon-legacy-838368
I0223 00:39:54.499574 377758 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-838368-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --entrypoint /usr/bin/test -v ingress-addon-legacy-838368:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib
I0223 00:39:56.008915 377758 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-838368-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --entrypoint /usr/bin/test -v ingress-addon-legacy-838368:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -d /var/lib: (1.509294438s)
I0223 00:39:56.008947 377758 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-838368
I0223 00:39:56.008966 377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 00:39:56.008993 377758 kic.go:194] Starting extracting preloaded images to volume ...
I0223 00:39:56.009059 377758 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-838368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir
I0223 00:40:01.061105 377758 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-838368:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf -I lz4 -xf /preloaded.tar -C /extractDir: (5.052003476s)
I0223 00:40:01.061148 377758 kic.go:203] duration metric: took 5.052151 seconds to extract preloaded images to volume
W0223 00:40:01.061312 377758 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0223 00:40:01.061442 377758 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0223 00:40:01.113152 377758 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-838368 --name ingress-addon-legacy-838368 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-838368 --network ingress-addon-legacy-838368 --ip 192.168.49.2 --volume ingress-addon-legacy-838368:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf
I0223 00:40:01.403865 377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Running}}
I0223 00:40:01.422210 377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
I0223 00:40:01.441081 377758 cli_runner.go:164] Run: docker exec ingress-addon-legacy-838368 stat /var/lib/dpkg/alternatives/iptables
I0223 00:40:01.481363 377758 oci.go:144] the created container "ingress-addon-legacy-838368" has a running status.
I0223 00:40:01.481407 377758 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa...
I0223 00:40:01.637161 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0223 00:40:01.637231 377758 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0223 00:40:01.656288 377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
I0223 00:40:01.674505 377758 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0223 00:40:01.674532 377758 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-838368 chown docker:docker /home/docker/.ssh/authorized_keys]
I0223 00:40:01.727668 377758 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-838368 --format={{.State.Status}}
I0223 00:40:01.743484 377758 machine.go:88] provisioning docker machine ...
I0223 00:40:01.743521 377758 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-838368"
I0223 00:40:01.743579 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:01.759347 377758 main.go:141] libmachine: Using SSH client type: native
I0223 00:40:01.759616 377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 127.0.0.1 33102 <nil> <nil>}
I0223 00:40:01.759637 377758 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-838368 && echo "ingress-addon-legacy-838368" | sudo tee /etc/hostname
I0223 00:40:01.760231 377758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43126->127.0.0.1:33102: read: connection reset by peer
I0223 00:40:04.900481 377758 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-838368
I0223 00:40:04.900589 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:04.916828 377758 main.go:141] libmachine: Using SSH client type: native
I0223 00:40:04.917064 377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 127.0.0.1 33102 <nil> <nil>}
I0223 00:40:04.917092 377758 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-838368' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-838368/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-838368' | sudo tee -a /etc/hosts;
fi
fi
I0223 00:40:05.046178 377758 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0223 00:40:05.046216 377758 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18233-317564/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-317564/.minikube}
I0223 00:40:05.046254 377758 ubuntu.go:177] setting up certificates
I0223 00:40:05.046269 377758 provision.go:83] configureAuth start
I0223 00:40:05.046351 377758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-838368
I0223 00:40:05.063332 377758 provision.go:138] copyHostCerts
I0223 00:40:05.063371 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
I0223 00:40:05.063403 377758 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem, removing ...
I0223 00:40:05.063412 377758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem
I0223 00:40:05.063480 377758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/cert.pem (1123 bytes)
I0223 00:40:05.063551 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
I0223 00:40:05.063569 377758 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem, removing ...
I0223 00:40:05.063580 377758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem
I0223 00:40:05.063604 377758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/key.pem (1675 bytes)
I0223 00:40:05.063644 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
I0223 00:40:05.063660 377758 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem, removing ...
I0223 00:40:05.063667 377758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem
I0223 00:40:05.063686 377758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-317564/.minikube/ca.pem (1078 bytes)
I0223 00:40:05.063743 377758 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-838368 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-838368]
I0223 00:40:05.121370 377758 provision.go:172] copyRemoteCerts
I0223 00:40:05.121440 377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0223 00:40:05.121496 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:05.137109 377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
I0223 00:40:05.230984 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem -> /etc/docker/server.pem
I0223 00:40:05.231045 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0223 00:40:05.252530 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0223 00:40:05.252648 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0223 00:40:05.273506 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0223 00:40:05.273579 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0223 00:40:05.294066 377758 provision.go:86] duration metric: configureAuth took 247.76753ms
I0223 00:40:05.294098 377758 ubuntu.go:193] setting minikube options for container-runtime
I0223 00:40:05.294253 377758 config.go:182] Loaded profile config "ingress-addon-legacy-838368": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0223 00:40:05.294300 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:05.309896 377758 main.go:141] libmachine: Using SSH client type: native
I0223 00:40:05.310133 377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 127.0.0.1 33102 <nil> <nil>}
I0223 00:40:05.310148 377758 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0223 00:40:05.438161 377758 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0223 00:40:05.438191 377758 ubuntu.go:71] root file system type: overlay
I0223 00:40:05.438290 377758 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0223 00:40:05.438349 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:05.454564 377758 main.go:141] libmachine: Using SSH client type: native
I0223 00:40:05.454772 377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 127.0.0.1 33102 <nil> <nil>}
I0223 00:40:05.454853 377758 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0223 00:40:05.593285 377758 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0223 00:40:05.593356 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:05.609550 377758 main.go:141] libmachine: Using SSH client type: native
I0223 00:40:05.609807 377758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 127.0.0.1 33102 <nil> <nil>}
I0223 00:40:05.609835 377758 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0223 00:40:06.275259 377758 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2024-02-06 21:12:51.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2024-02-23 00:40:05.588690843 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0223 00:40:06.275288 377758 machine.go:91] provisioned docker machine in 4.531782245s
I0223 00:40:06.275299 377758 client.go:171] LocalClient.Create took 11.904267527s
I0223 00:40:06.275318 377758 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-838368" took 11.904340434s
I0223 00:40:06.275328 377758 start.go:300] post-start starting for "ingress-addon-legacy-838368" (driver="docker")
I0223 00:40:06.275339 377758 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0223 00:40:06.275403 377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0223 00:40:06.275445 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:06.291972 377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
I0223 00:40:06.387256 377758 ssh_runner.go:195] Run: cat /etc/os-release
I0223 00:40:06.390284 377758 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0223 00:40:06.390314 377758 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0223 00:40:06.390322 377758 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0223 00:40:06.390329 377758 info.go:137] Remote host: Ubuntu 22.04.3 LTS
I0223 00:40:06.390354 377758 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/addons for local assets ...
I0223 00:40:06.390407 377758 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-317564/.minikube/files for local assets ...
I0223 00:40:06.390476 377758 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> 3243752.pem in /etc/ssl/certs
I0223 00:40:06.390489 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> /etc/ssl/certs/3243752.pem
I0223 00:40:06.390563 377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0223 00:40:06.397990 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /etc/ssl/certs/3243752.pem (1708 bytes)
I0223 00:40:06.418759 377758 start.go:303] post-start completed in 143.418294ms
I0223 00:40:06.419075 377758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-838368
I0223 00:40:06.435524 377758 profile.go:148] Saving config to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/config.json ...
I0223 00:40:06.435743 377758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0223 00:40:06.435782 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:06.452817 377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
I0223 00:40:06.542756 377758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0223 00:40:06.546784 377758 start.go:128] duration metric: createHost completed in 12.178427063s
I0223 00:40:06.546814 377758 start.go:83] releasing machines lock for "ingress-addon-legacy-838368", held for 12.17859169s
I0223 00:40:06.546890 377758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-838368
I0223 00:40:06.562586 377758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0223 00:40:06.562610 377758 ssh_runner.go:195] Run: cat /version.json
I0223 00:40:06.562674 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:06.562675 377758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-838368
I0223 00:40:06.578493 377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
I0223 00:40:06.579466 377758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33102 SSHKeyPath:/home/jenkins/minikube-integration/18233-317564/.minikube/machines/ingress-addon-legacy-838368/id_rsa Username:docker}
I0223 00:40:06.665694 377758 ssh_runner.go:195] Run: systemctl --version
I0223 00:40:06.752850 377758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0223 00:40:06.757781 377758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0223 00:40:06.780625 377758 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0223 00:40:06.780723 377758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0223 00:40:06.796237 377758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0223 00:40:06.810879 377758 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0223 00:40:06.810914 377758 start.go:475] detecting cgroup driver to use...
I0223 00:40:06.810951 377758 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 00:40:06.811094 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0223 00:40:06.825040 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0223 00:40:06.833534 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0223 00:40:06.841812 377758 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0223 00:40:06.841877 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0223 00:40:06.850328 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 00:40:06.859007 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0223 00:40:06.867580 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0223 00:40:06.875909 377758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0223 00:40:06.883732 377758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
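The sed invocations above all rewrite /etc/containerd/config.toml in place so containerd matches the "cgroupfs" driver detected on the host. A minimal Go sketch of just the SystemdCgroup edit, assuming local execution rather than the ssh_runner (illustrative, not minikube's code):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same effect as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
```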
I0223 00:40:06.892194 377758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0223 00:40:06.899384 377758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0223 00:40:06.906432 377758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 00:40:06.975196 377758 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0223 00:40:07.060459 377758 start.go:475] detecting cgroup driver to use...
I0223 00:40:07.060514 377758 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0223 00:40:07.060575 377758 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0223 00:40:07.072293 377758 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0223 00:40:07.072377 377758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0223 00:40:07.083900 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0223 00:40:07.099495 377758 ssh_runner.go:195] Run: which cri-dockerd
I0223 00:40:07.102727 377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0223 00:40:07.111975 377758 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0223 00:40:07.129332 377758 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0223 00:40:07.213617 377758 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0223 00:40:07.312203 377758 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0223 00:40:07.312367 377758 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
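The 130-byte daemon.json payload itself is not shown in the log; its exact contents are an assumption. A plausible minimal payload for this step, which only has to pin Docker to the "cgroupfs" driver:

```go
package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Assumed payload: this step configures the cgroup driver, so
	// exec-opts is the one setting that must be present.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		panic(err)
	}
}
```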
I0223 00:40:07.328280 377758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 00:40:07.403680 377758 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 00:40:07.634498 377758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 00:40:07.656743 377758 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0223 00:40:07.681832 377758 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 25.0.3 ...
I0223 00:40:07.681955 377758 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-838368 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
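The --format template above makes `docker network inspect` emit one JSON object per network. A small Go sketch of consuming it (hypothetical consumer code; note the template's range over .Containers leaves a trailing comma inside ContainerIPs that must be patched before unmarshalling):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type netInfo struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}`
	out, err := exec.Command("docker", "network", "inspect",
		"ingress-addon-legacy-838368", "--format", format).Output()
	if err != nil {
		panic(err)
	}
	// Patch the trailing comma the template emits when containers exist.
	s := strings.Replace(string(out), ",]", "]", 1)
	var n netInfo
	if err := json.Unmarshal([]byte(s), &n); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", n)
}
```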
I0223 00:40:07.698240 377758 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0223 00:40:07.701817 377758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
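That bash one-liner is the standard /etc/hosts upsert: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts. Roughly, in Go (a sketch approximating the grep with a suffix match):

```go
package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// grep -v $'\thost.minikube.internal$' drops the old entry.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.1 host.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
```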
I0223 00:40:07.711712 377758 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0223 00:40:07.711761 377758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 00:40:07.728889 377758 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 00:40:07.728915 377758 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0223 00:40:07.728969 377758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0223 00:40:07.736972 377758 ssh_runner.go:195] Run: which lz4
I0223 00:40:07.739957 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0223 00:40:07.740035 377758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0223 00:40:07.742967 377758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0223 00:40:07.742999 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0223 00:40:08.536216 377758 docker.go:649] Took 0.796199 seconds to copy over tarball
I0223 00:40:08.536289 377758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0223 00:40:10.521024 377758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.984691525s)
I0223 00:40:10.521059 377758 ssh_runner.go:146] rm: /preloaded.tar.lz4
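The preload round trip is: stat the target, scp the tarball over when missing, untar it into /var, then delete it. A sketch of the extraction step alone, run locally instead of over SSH, with the exact flags from the logged command:

```go
package main

import "os/exec"

func main() {
	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
}
```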
I0223 00:40:10.582742 377758 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0223 00:40:10.590580 377758 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0223 00:40:10.606534 377758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0223 00:40:10.684602 377758 ssh_runner.go:195] Run: sudo systemctl restart docker
I0223 00:40:13.285754 377758 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.60111495s)
I0223 00:40:13.285840 377758 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0223 00:40:13.304588 377758 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0223 00:40:13.304612 377758 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0223 00:40:13.304625 377758 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0223 00:40:13.306030 377758 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0223 00:40:13.306171 377758 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0223 00:40:13.306175 377758 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0223 00:40:13.306192 377758 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0223 00:40:13.306213 377758 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0223 00:40:13.306240 377758 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0223 00:40:13.306275 377758 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0223 00:40:13.306036 377758 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0223 00:40:13.307074 377758 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0223 00:40:13.307176 377758 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0223 00:40:13.307235 377758 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0223 00:40:13.307254 377758 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0223 00:40:13.307278 377758 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0223 00:40:13.307365 377758 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0223 00:40:13.307371 377758 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0223 00:40:13.307397 377758 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0223 00:40:13.486399 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0223 00:40:13.486399 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0223 00:40:13.490042 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0223 00:40:13.504805 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0223 00:40:13.506265 377758 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0223 00:40:13.506317 377758 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0223 00:40:13.506359 377758 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0223 00:40:13.506366 377758 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0223 00:40:13.506402 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0223 00:40:13.506408 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0223 00:40:13.507586 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0223 00:40:13.507735 377758 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0223 00:40:13.507777 377758 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0223 00:40:13.507822 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0223 00:40:13.521410 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0223 00:40:13.523140 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0223 00:40:13.525064 377758 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0223 00:40:13.525124 377758 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0223 00:40:13.525161 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0223 00:40:13.531860 377758 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0223 00:40:13.574371 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0223 00:40:13.574428 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0223 00:40:13.574485 377758 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0223 00:40:13.574535 377758 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0223 00:40:13.574548 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0223 00:40:13.574586 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0223 00:40:13.587667 377758 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0223 00:40:13.587709 377758 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0223 00:40:13.587754 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0223 00:40:13.587957 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0223 00:40:13.590147 377758 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0223 00:40:13.590190 377758 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0223 00:40:13.590231 377758 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0223 00:40:13.595440 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0223 00:40:13.606338 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0223 00:40:13.608550 377758 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0223 00:40:13.608595 377758 cache_images.go:92] LoadImages completed in 303.957131ms
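Each "needs transfer" decision above boils down to comparing `docker image inspect --format {{.Id}}` against the hash recorded for the cached image. An illustrative Go version of that check (the comparison logic is a sketch; the image name and expected hash are the pause values from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime is missing the image or
// holds it under a different ID than the cache expects.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
}
```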
W0223 00:40:13.608652 377758 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-317564/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
I0223 00:40:13.608696 377758 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0223 00:40:13.684401 377758 cni.go:84] Creating CNI manager for ""
I0223 00:40:13.684433 377758 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0223 00:40:13.684455 377758 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0223 00:40:13.684474 377758 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-838368 NodeName:ingress-addon-legacy-838368 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0223 00:40:13.684635 377758 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-838368"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0223 00:40:13.684706 377758 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-838368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0223 00:40:13.684761 377758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0223 00:40:13.693307 377758 binaries.go:44] Found k8s binaries, skipping transfer
I0223 00:40:13.693383 377758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0223 00:40:13.700897 377758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0223 00:40:13.716258 377758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0223 00:40:13.731277 377758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0223 00:40:13.747103 377758 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0223 00:40:13.750096 377758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0223 00:40:13.759456 377758 certs.go:56] Setting up /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368 for IP: 192.168.49.2
I0223 00:40:13.759490 377758 certs.go:190] acquiring lock for shared ca certs: {Name:mk61b7180586719fd962a2bfdb44a8ad933bd3aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:40:13.759646 377758 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key
I0223 00:40:13.759694 377758 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key
I0223 00:40:13.759761 377758 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.key
I0223 00:40:13.759780 377758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.crt with IP's: []
I0223 00:40:13.922518 377758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.crt ...
I0223 00:40:13.922549 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.crt: {Name:mk682c96244c8a17d35edaa6656fea4a9ab28eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:40:13.922711 377758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.key ...
I0223 00:40:13.922727 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/client.key: {Name:mk8d5c45a56877445bbdb572f958752e97fbd28e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:40:13.922809 377758 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2
I0223 00:40:13.922824 377758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0223 00:40:14.011628 377758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2 ...
I0223 00:40:14.011661 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2: {Name:mkc128ee5384e47c582a935515d634944f05717f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:40:14.011817 377758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2 ...
I0223 00:40:14.011831 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2: {Name:mk7a17ed5cbd1d362645e189717eddd537e35aa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:40:14.011939 377758 certs.go:337] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt
I0223 00:40:14.012032 377758 certs.go:341] copying /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key
I0223 00:40:14.012093 377758 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key
I0223 00:40:14.012105 377758 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt with IP's: []
I0223 00:40:14.224951 377758 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt ...
I0223 00:40:14.224983 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt: {Name:mk3df2d6b96d64b2e4eed6dc41ec21f03c3fd6dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0223 00:40:14.225150 377758 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key ...
I0223 00:40:14.225164 377758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key: {Name:mkfeec4b6e0c772d5957cf898551efc304a32f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
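The client and proxy-client certs above are freshly signed leaves under the already-cached minikube CA. A condensed Go sketch of such a generation step, assuming a PKCS#1 RSA CA key and minikube-style subject fields (both assumptions; error handling elided for brevity):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Load the shared CA pair (errors ignored in this sketch).
	caPEM, _ := os.ReadFile(".minikube/ca.crt")
	caKeyPEM, _ := os.ReadFile(".minikube/ca.key")
	caBlock, _ := pem.Decode(caPEM)
	ca, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// New leaf key plus a client-auth cert with no SANs ("with IP's: []").
	priv, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, ca, &priv.PublicKey, caKey)
	os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}), 0600)
}
```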
I0223 00:40:14.225228 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0223 00:40:14.225246 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0223 00:40:14.225259 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0223 00:40:14.225271 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0223 00:40:14.225281 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0223 00:40:14.225291 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0223 00:40:14.225304 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0223 00:40:14.225316 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0223 00:40:14.225371 377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem (1338 bytes)
W0223 00:40:14.225410 377758 certs.go:433] ignoring /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375_empty.pem, impossibly tiny 0 bytes
I0223 00:40:14.225420 377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca-key.pem (1679 bytes)
I0223 00:40:14.225444 377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/ca.pem (1078 bytes)
I0223 00:40:14.225466 377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/cert.pem (1123 bytes)
I0223 00:40:14.225486 377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/home/jenkins/minikube-integration/18233-317564/.minikube/certs/key.pem (1675 bytes)
I0223 00:40:14.225527 377758 certs.go:437] found cert: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem (1708 bytes)
I0223 00:40:14.225559 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem -> /usr/share/ca-certificates/324375.pem
I0223 00:40:14.225572 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem -> /usr/share/ca-certificates/3243752.pem
I0223 00:40:14.225584 377758 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0223 00:40:14.226259 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0223 00:40:14.248993 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0223 00:40:14.270172 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0223 00:40:14.291901 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/profiles/ingress-addon-legacy-838368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0223 00:40:14.313264 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0223 00:40:14.334908 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0223 00:40:14.356302 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0223 00:40:14.377543 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0223 00:40:14.398935 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/certs/324375.pem --> /usr/share/ca-certificates/324375.pem (1338 bytes)
I0223 00:40:14.420106 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/files/etc/ssl/certs/3243752.pem --> /usr/share/ca-certificates/3243752.pem (1708 bytes)
I0223 00:40:14.441205 377758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-317564/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0223 00:40:14.461748 377758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0223 00:40:14.476964 377758 ssh_runner.go:195] Run: openssl version
I0223 00:40:14.481806 377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3243752.pem && ln -fs /usr/share/ca-certificates/3243752.pem /etc/ssl/certs/3243752.pem"
I0223 00:40:14.489779 377758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3243752.pem
I0223 00:40:14.492773 377758 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 23 00:36 /usr/share/ca-certificates/3243752.pem
I0223 00:40:14.492832 377758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3243752.pem
I0223 00:40:14.498825 377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3243752.pem /etc/ssl/certs/3ec20f2e.0"
I0223 00:40:14.507398 377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0223 00:40:14.516324 377758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0223 00:40:14.519493 377758 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 23 00:32 /usr/share/ca-certificates/minikubeCA.pem
I0223 00:40:14.519543 377758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0223 00:40:14.526006 377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0223 00:40:14.534234 377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/324375.pem && ln -fs /usr/share/ca-certificates/324375.pem /etc/ssl/certs/324375.pem"
I0223 00:40:14.542379 377758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/324375.pem
I0223 00:40:14.545361 377758 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 23 00:36 /usr/share/ca-certificates/324375.pem
I0223 00:40:14.545424 377758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/324375.pem
I0223 00:40:14.551547 377758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/324375.pem /etc/ssl/certs/51391683.0"
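The openssl/ln pairs above install each CA into the OpenSSL hash directory: `openssl x509 -hash` prints the subject hash that lookups in /etc/ssl/certs rely on, and the cert is linked in as <hash>.0. A Go sketch of one iteration, shelling out to the same openssl binary rather than reimplementing the hash:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
}
```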
I0223 00:40:14.559815 377758 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0223 00:40:14.562828 377758 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0223 00:40:14.562886 377758 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-838368 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708008208-17936@sha256:4ea1136332ba1476cda33a97bf12e2f96995cc120674fbafd3ade22d1118ecdf Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-838368 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0223 00:40:14.563037 377758 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0223 00:40:14.579431 377758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0223 00:40:14.587724 377758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0223 00:40:14.596024 377758 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 00:40:14.596075 377758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 00:40:14.604301 377758 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 00:40:14.604365 377758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 00:40:14.647959 377758 kubeadm.go:322] W0223 00:40:14.647464 1834 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 00:40:14.761393 377758 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 00:40:14.810896 377758 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0223 00:40:14.811187 377758 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
I0223 00:40:14.877714 377758 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 00:40:17.626037 377758 kubeadm.go:322] W0223 00:40:17.625718 1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 00:40:17.627060 377758 kubeadm.go:322] W0223 00:40:17.626730 1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 00:44:17.631768 377758 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 00:44:17.631893 377758 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0223 00:44:17.634579 377758 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 00:44:17.634632 377758 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 00:44:17.634712 377758 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0223 00:44:17.634768 377758 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
I0223 00:44:17.634815 377758 kubeadm.go:322] DOCKER_VERSION: 25.0.3
I0223 00:44:17.634852 377758 kubeadm.go:322] OS: Linux
I0223 00:44:17.634896 377758 kubeadm.go:322] CGROUPS_CPU: enabled
I0223 00:44:17.634938 377758 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0223 00:44:17.634979 377758 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0223 00:44:17.635026 377758 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0223 00:44:17.635070 377758 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0223 00:44:17.635169 377758 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0223 00:44:17.635282 377758 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 00:44:17.635429 377758 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 00:44:17.635573 377758 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 00:44:17.635672 377758 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 00:44:17.635746 377758 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 00:44:17.635782 377758 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 00:44:17.635834 377758 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 00:44:17.637801 377758 out.go:204] - Generating certificates and keys ...
I0223 00:44:17.637891 377758 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 00:44:17.637947 377758 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 00:44:17.638019 377758 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0223 00:44:17.638109 377758 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0223 00:44:17.638192 377758 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0223 00:44:17.638273 377758 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0223 00:44:17.638327 377758 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0223 00:44:17.638443 377758 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 00:44:17.638499 377758 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0223 00:44:17.638610 377758 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I0223 00:44:17.638685 377758 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0223 00:44:17.638771 377758 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0223 00:44:17.638811 377758 kubeadm.go:322] [certs] Generating "sa" key and public key
I0223 00:44:17.638866 377758 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 00:44:17.638914 377758 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 00:44:17.638970 377758 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 00:44:17.639066 377758 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 00:44:17.639121 377758 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 00:44:17.639196 377758 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 00:44:17.641678 377758 out.go:204] - Booting up control plane ...
I0223 00:44:17.641767 377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 00:44:17.641849 377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 00:44:17.641926 377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 00:44:17.642061 377758 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 00:44:17.642263 377758 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 00:44:17.642343 377758 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 00:44:17.642355 377758 kubeadm.go:322]
I0223 00:44:17.642389 377758 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 00:44:17.642428 377758 kubeadm.go:322] timed out waiting for the condition
I0223 00:44:17.642434 377758 kubeadm.go:322]
I0223 00:44:17.642470 377758 kubeadm.go:322] This error is likely caused by:
I0223 00:44:17.642500 377758 kubeadm.go:322] - The kubelet is not running
I0223 00:44:17.642596 377758 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 00:44:17.642612 377758 kubeadm.go:322]
I0223 00:44:17.642702 377758 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 00:44:17.642736 377758 kubeadm.go:322] - 'systemctl status kubelet'
I0223 00:44:17.642764 377758 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 00:44:17.642770 377758 kubeadm.go:322]
I0223 00:44:17.642857 377758 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 00:44:17.642932 377758 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0223 00:44:17.642944 377758 kubeadm.go:322]
I0223 00:44:17.643041 377758 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0223 00:44:17.643092 377758 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 00:44:17.643161 377758 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 00:44:17.643191 377758 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 00:44:17.643211 377758 kubeadm.go:322]
W0223 00:44:17.643419 377758 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1051-gcp
DOCKER_VERSION: 25.0.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 00:40:14.647464 1834 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 00:40:17.625718 1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 00:40:17.626730 1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
[0;37mKERNEL_VERSION[0m: [0;32m5.15.0-1051-gcp[0m
[0;37mDOCKER_VERSION[0m: [0;32m25.0.3[0m
[0;37mOS[0m: [0;32mLinux[0m
[0;37mCGROUPS_CPU[0m: [0;32menabled[0m
[0;37mCGROUPS_CPUACCT[0m: [0;32menabled[0m
[0;37mCGROUPS_CPUSET[0m: [0;32menabled[0m
[0;37mCGROUPS_DEVICES[0m: [0;32menabled[0m
[0;37mCGROUPS_FREEZER[0m: [0;32menabled[0m
[0;37mCGROUPS_MEMORY[0m: [0;32menabled[0m
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-838368 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 00:40:14.647464 1834 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 00:40:17.625718 1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 00:40:17.626730 1834 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
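The first kubeadm attempt has timed out at this point; minikube resets and retries below. Because the node is a docker-driver container, the kubelet checks suggested in the advice above can be run from the host via minikube ssh. The following commands are an illustration using the profile name from this log, not output captured in the run:

  minikube ssh -p ingress-addon-legacy-838368 -- sudo systemctl status kubelet
  minikube ssh -p ingress-addon-legacy-838368 -- sudo journalctl -u kubelet -n 100 --no-pager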
I0223 00:44:17.643515 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0223 00:44:18.371475 377758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0223 00:44:18.381850 377758 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0223 00:44:18.381910 377758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0223 00:44:18.389415 377758 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0223 00:44:18.389458 377758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0223 00:44:18.431260 377758 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0223 00:44:18.431344 377758 kubeadm.go:322] [preflight] Running pre-flight checks
I0223 00:44:18.593850 377758 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
I0223 00:44:18.593957 377758 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1051-gcp
I0223 00:44:18.594043 377758 kubeadm.go:322] DOCKER_VERSION: 25.0.3
I0223 00:44:18.594140 377758 kubeadm.go:322] OS: Linux
I0223 00:44:18.594245 377758 kubeadm.go:322] CGROUPS_CPU: enabled
I0223 00:44:18.594325 377758 kubeadm.go:322] CGROUPS_CPUACCT: enabled
I0223 00:44:18.594401 377758 kubeadm.go:322] CGROUPS_CPUSET: enabled
I0223 00:44:18.594479 377758 kubeadm.go:322] CGROUPS_DEVICES: enabled
I0223 00:44:18.594562 377758 kubeadm.go:322] CGROUPS_FREEZER: enabled
I0223 00:44:18.594643 377758 kubeadm.go:322] CGROUPS_MEMORY: enabled
I0223 00:44:18.660997 377758 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0223 00:44:18.661130 377758 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0223 00:44:18.661237 377758 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0223 00:44:18.826855 377758 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0223 00:44:18.827733 377758 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0223 00:44:18.827804 377758 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0223 00:44:18.911167 377758 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0223 00:44:18.914791 377758 out.go:204] - Generating certificates and keys ...
I0223 00:44:18.914898 377758 kubeadm.go:322] [certs] Using existing ca certificate authority
I0223 00:44:18.914982 377758 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0223 00:44:18.915092 377758 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0223 00:44:18.915183 377758 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0223 00:44:18.915275 377758 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0223 00:44:18.915364 377758 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0223 00:44:18.915482 377758 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0223 00:44:18.915584 377758 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0223 00:44:18.915710 377758 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0223 00:44:18.915816 377758 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0223 00:44:18.915883 377758 kubeadm.go:322] [certs] Using the existing "sa" key
I0223 00:44:18.915964 377758 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 00:44:19.088516 377758 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0223 00:44:19.171278 377758 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 00:44:19.325843 377758 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 00:44:19.938889 377758 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 00:44:19.939517 377758 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0223 00:44:19.941564 377758 out.go:204] - Booting up control plane ...
I0223 00:44:19.941655 377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 00:44:19.945700 377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 00:44:19.946756 377758 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 00:44:19.947323 377758 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 00:44:19.950391 377758 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0223 00:44:59.951111 377758 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0223 00:48:19.972138 377758 kubeadm.go:322]
I0223 00:48:19.972248 377758 kubeadm.go:322] Unfortunately, an error has occurred:
I0223 00:48:19.972303 377758 kubeadm.go:322] timed out waiting for the condition
I0223 00:48:19.972313 377758 kubeadm.go:322]
I0223 00:48:19.972362 377758 kubeadm.go:322] This error is likely caused by:
I0223 00:48:19.972411 377758 kubeadm.go:322] - The kubelet is not running
I0223 00:48:19.972534 377758 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0223 00:48:19.972546 377758 kubeadm.go:322]
I0223 00:48:19.972664 377758 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0223 00:48:19.972713 377758 kubeadm.go:322] - 'systemctl status kubelet'
I0223 00:48:19.972761 377758 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0223 00:48:19.972770 377758 kubeadm.go:322]
I0223 00:48:19.972881 377758 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0223 00:48:19.972992 377758 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI.
I0223 00:48:19.973002 377758 kubeadm.go:322]
I0223 00:48:19.973104 377758 kubeadm.go:322] Here is one example of how you can list all Kubernetes containers running in Docker:
I0223 00:48:19.973176 377758 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0223 00:48:19.973277 377758 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0223 00:48:19.973324 377758 kubeadm.go:322] - 'docker logs CONTAINERID'
I0223 00:48:19.973336 377758 kubeadm.go:322]
I0223 00:48:19.975369 377758 kubeadm.go:322] W0223 00:44:18.430772 5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0223 00:48:19.975574 377758 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0223 00:48:19.975715 377758 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
I0223 00:48:19.975923 377758 kubeadm.go:322] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
I0223 00:48:19.976046 377758 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0223 00:48:19.976188 377758 kubeadm.go:322] W0223 00:44:19.945491 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 00:48:19.976337 377758 kubeadm.go:322] W0223 00:44:19.946527 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0223 00:48:19.976477 377758 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0223 00:48:19.976572 377758 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0223 00:48:19.976710 377758 kubeadm.go:406] StartCluster complete in 8m5.413835988s
I0223 00:48:19.976868 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0223 00:48:19.994132 377758 logs.go:276] 0 containers: []
W0223 00:48:19.994156 377758 logs.go:278] No container was found matching "kube-apiserver"
I0223 00:48:19.994219 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0223 00:48:20.009930 377758 logs.go:276] 0 containers: []
W0223 00:48:20.009962 377758 logs.go:278] No container was found matching "etcd"
I0223 00:48:20.010015 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0223 00:48:20.026342 377758 logs.go:276] 0 containers: []
W0223 00:48:20.026372 377758 logs.go:278] No container was found matching "coredns"
I0223 00:48:20.026428 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0223 00:48:20.042848 377758 logs.go:276] 0 containers: []
W0223 00:48:20.042882 377758 logs.go:278] No container was found matching "kube-scheduler"
I0223 00:48:20.042934 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0223 00:48:20.060794 377758 logs.go:276] 0 containers: []
W0223 00:48:20.060824 377758 logs.go:278] No container was found matching "kube-proxy"
I0223 00:48:20.060872 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0223 00:48:20.076670 377758 logs.go:276] 0 containers: []
W0223 00:48:20.076699 377758 logs.go:278] No container was found matching "kube-controller-manager"
I0223 00:48:20.076747 377758 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0223 00:48:20.092844 377758 logs.go:276] 0 containers: []
W0223 00:48:20.092870 377758 logs.go:278] No container was found matching "kindnet"
I0223 00:48:20.092887 377758 logs.go:123] Gathering logs for kubelet ...
I0223 00:48:20.092903 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0223 00:48:20.114509 377758 logs.go:138] Found kubelet problem: Feb 23 00:47:50 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:50.813296 5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
W0223 00:48:20.116450 377758 logs.go:138] Found kubelet problem: Feb 23 00:47:52 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:52.812663 5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
W0223 00:48:20.122035 377758 logs.go:138] Found kubelet problem: Feb 23 00:47:57 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:57.813070 5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
W0223 00:48:20.125134 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:00 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:00.812549 5752 pod_workers.go:191] Error syncing pod 68d95bb8149ed8a5ab727bf63000f885 ("etcd-ingress-addon-legacy-838368_kube-system(68d95bb8149ed8a5ab727bf63000f885)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
W0223 00:48:20.128498 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:03 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:03.813330 5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
W0223 00:48:20.130596 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:05 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:05.812868 5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
W0223 00:48:20.136687 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:12 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:12.814975 5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
W0223 00:48:20.137148 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:12 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:12.816105 5752 pod_workers.go:191] Error syncing pod 68d95bb8149ed8a5ab727bf63000f885 ("etcd-ingress-addon-legacy-838368_kube-system(68d95bb8149ed8a5ab727bf63000f885)"), skipping: failed to "StartContainer" for "etcd" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/etcd:3.4.3-0\": Id or size of image \"k8s.gcr.io/etcd:3.4.3-0\" is not set"
W0223 00:48:20.140578 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:16 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:16.812804 5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
W0223 00:48:20.142822 377758 logs.go:138] Found kubelet problem: Feb 23 00:48:18 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:48:18.814260 5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
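Every kubelet problem found above is the same ImageInspectError: the image's Id or size "is not set" when the kubelet inspects it. Read together with the SystemVerification warning earlier (Docker 25.0.3 vs. latest validated 19.03), this pattern suggests, though the log does not prove, that the v1.18 kubelet's dockershim cannot parse the image-inspect response of a much newer Docker engine. A manual check of whether the images are actually present and inspectable on the node could look like this (illustrative commands, run inside the node, not part of the captured run):

  docker images k8s.gcr.io/kube-apiserver
  docker image inspect --format '{{.Id}} {{.Size}}' k8s.gcr.io/kube-apiserver:v1.18.20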
I0223 00:48:20.144011 377758 logs.go:123] Gathering logs for dmesg ...
I0223 00:48:20.144034 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0223 00:48:20.173274 377758 logs.go:123] Gathering logs for describe nodes ...
I0223 00:48:20.173312 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0223 00:48:20.231093 377758 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
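The connection-refused error is expected at this point: the container listings above found no kube-apiserver (or any other control-plane container), so nothing is serving on localhost:8443. Assuming standard tooling inside the node, this can be confirmed directly; the commands below are an illustration rather than output from the run:

  sudo ss -ltnp | grep 8443                  # no listener expected while the apiserver is down
  curl -ks https://localhost:8443/healthz    # expected to fail with connection refused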
I0223 00:48:20.231118 377758 logs.go:123] Gathering logs for Docker ...
I0223 00:48:20.231130 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0223 00:48:20.249579 377758 logs.go:123] Gathering logs for container status ...
I0223 00:48:20.249614 377758 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0223 00:48:20.285976 377758 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1051-gcp
DOCKER_VERSION: 25.0.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 00:44:18.430772 5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 00:44:19.945491 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 00:44:19.946527 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 00:48:20.286029 377758 out.go:239] *
W0223 00:48:20.286120 377758 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1051-gcp
DOCKER_VERSION: 25.0.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 00:44:18.430772 5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 00:44:19.945491 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 00:44:19.946527 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 00:48:20.286146 377758 out.go:239] *
W0223 00:48:20.287449 377758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0223 00:48:20.290118 377758 out.go:177] X Problems detected in kubelet:
I0223 00:48:20.291689 377758 out.go:177] Feb 23 00:47:50 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:50.813296 5752 pod_workers.go:191] Error syncing pod 78b40af95c64e5112ac985f00b18628c ("kube-apiserver-ingress-addon-legacy-838368_kube-system(78b40af95c64e5112ac985f00b18628c)"), skipping: failed to "StartContainer" for "kube-apiserver" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-apiserver:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-apiserver:v1.18.20\" is not set"
I0223 00:48:20.293525 377758 out.go:177] Feb 23 00:47:52 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:52.812663 5752 pod_workers.go:191] Error syncing pod d12e497b0008e22acbcd5a9cf2dd48ac ("kube-scheduler-ingress-addon-legacy-838368_kube-system(d12e497b0008e22acbcd5a9cf2dd48ac)"), skipping: failed to "StartContainer" for "kube-scheduler" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-scheduler:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-scheduler:v1.18.20\" is not set"
I0223 00:48:20.295572 377758 out.go:177] Feb 23 00:47:57 ingress-addon-legacy-838368 kubelet[5752]: E0223 00:47:57.813070 5752 pod_workers.go:191] Error syncing pod 49b043cd68fd30a453bdf128db5271f3 ("kube-controller-manager-ingress-addon-legacy-838368_kube-system(49b043cd68fd30a453bdf128db5271f3)"), skipping: failed to "StartContainer" for "kube-controller-manager" with ImageInspectError: "Failed to inspect image \"k8s.gcr.io/kube-controller-manager:v1.18.20\": Id or size of image \"k8s.gcr.io/kube-controller-manager:v1.18.20\" is not set"
I0223 00:48:20.298693 377758 out.go:177]
W0223 00:48:20.300097 377758 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1051-gcp
DOCKER_VERSION: 25.0.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you can list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0223 00:44:18.430772 5532 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1051-gcp\n", err: exit status 1
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0223 00:44:19.945491 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0223 00:44:19.946527 5532 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0223 00:48:20.300154 377758 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0223 00:48:20.300171 377758 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0223 00:48:20.302020 377758 out.go:177]
** /stderr **
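Acting on the suggestion printed near the end of the log would mean retrying the same start command with the extra kubelet flag. The following is a sketch built from the flags already shown in this log, not a verified fix for this failure (which may instead require a Docker engine closer to the validated 19.03):

  out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 \
    --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
    --driver=docker --container-runtime=docker \
    --extra-config=kubelet.cgroup-driver=systemd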
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-838368 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=docker" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (511.13s)