=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2
E0229 00:56:54.539426 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:59:10.694957 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:59:38.379666 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/addons-391247/client.crt: no such file or directory
E0229 00:59:57.865529 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.870929 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.881192 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.901487 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:57.941744 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:58.022090 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:58.182569 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:58.503203 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 00:59:59.144315 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:00.425150 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:02.987021 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:08.107439 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:18.347894 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:00:38.829093 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:01:19.790201 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
E0229 01:02:41.711324 122595 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/functional-181199/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : exit status 109 (6m41.304958948s)
-- stdout --
* [ingress-addon-legacy-270792] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18063
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node ingress-addon-legacy-270792 in cluster ingress-addon-legacy-270792
* Downloading Kubernetes v1.18.20 preload ...
* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Feb 29 01:03:21 ingress-addon-legacy-270792 kubelet[51519]: F0229 01:03:21.454607 51519 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 01:03:22 ingress-addon-legacy-270792 kubelet[51702]: F0229 01:03:22.668129 51702 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 01:03:23 ingress-addon-legacy-270792 kubelet[51880]: F0229 01:03:23.955336 51880 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
-- /stdout --
** stderr **
I0229 00:56:48.006560 131854 out.go:291] Setting OutFile to fd 1 ...
I0229 00:56:48.006832 131854 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:56:48.006843 131854 out.go:304] Setting ErrFile to fd 2...
I0229 00:56:48.006848 131854 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 00:56:48.007068 131854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-115328/.minikube/bin
I0229 00:56:48.007657 131854 out.go:298] Setting JSON to false
I0229 00:56:48.008588 131854 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2359,"bootTime":1709165849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0229 00:56:48.008655 131854 start.go:139] virtualization: kvm guest
I0229 00:56:48.011148 131854 out.go:177] * [ingress-addon-legacy-270792] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0229 00:56:48.012546 131854 notify.go:220] Checking for updates...
I0229 00:56:48.014279 131854 out.go:177] - MINIKUBE_LOCATION=18063
I0229 00:56:48.015648 131854 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0229 00:56:48.016994 131854 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18063-115328/kubeconfig
I0229 00:56:48.018255 131854 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-115328/.minikube
I0229 00:56:48.019564 131854 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0229 00:56:48.020864 131854 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0229 00:56:48.022268 131854 driver.go:392] Setting default libvirt URI to qemu:///system
I0229 00:56:48.058268 131854 out.go:177] * Using the kvm2 driver based on user configuration
I0229 00:56:48.059381 131854 start.go:299] selected driver: kvm2
I0229 00:56:48.059394 131854 start.go:903] validating driver "kvm2" against <nil>
I0229 00:56:48.059405 131854 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0229 00:56:48.060172 131854 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 00:56:48.060247 131854 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-115328/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0229 00:56:48.074948 131854 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I0229 00:56:48.075025 131854 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0229 00:56:48.075272 131854 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0229 00:56:48.075359 131854 cni.go:84] Creating CNI manager for ""
I0229 00:56:48.075387 131854 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0229 00:56:48.075401 131854 start_flags.go:323] config:
{Name:ingress-addon-legacy-270792 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 00:56:48.075576 131854 iso.go:125] acquiring lock: {Name:mka80d573fa8b54775426ef2857d894d76900941 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 00:56:48.078002 131854 out.go:177] * Starting control plane node ingress-addon-legacy-270792 in cluster ingress-addon-legacy-270792
I0229 00:56:48.079309 131854 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0229 00:56:48.104388 131854 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0229 00:56:48.104417 131854 cache.go:56] Caching tarball of preloaded images
I0229 00:56:48.104553 131854 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0229 00:56:48.106397 131854 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0229 00:56:48.108342 131854 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0229 00:56:48.133418 131854 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0229 00:56:52.478052 131854 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0229 00:56:52.478150 131854 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0229 00:56:53.261650 131854 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0229 00:56:53.262023 131854 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/config.json ...
I0229 00:56:53.262052 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/config.json: {Name:mk2e02e5999fc20f88ce115938f1f2ccbf25a78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:56:53.262222 131854 start.go:365] acquiring machines lock for ingress-addon-legacy-270792: {Name:mk4840bd51ce9e92879b51fa6af485d250291115 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0229 00:56:53.262257 131854 start.go:369] acquired machines lock for "ingress-addon-legacy-270792" in 17.673µs
I0229 00:56:53.262274 131854 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-270792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0229 00:56:53.262353 131854 start.go:125] createHost starting for "" (driver="kvm2")
I0229 00:56:53.264457 131854 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I0229 00:56:53.264620 131854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 00:56:53.264645 131854 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 00:56:53.279093 131854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38265
I0229 00:56:53.279633 131854 main.go:141] libmachine: () Calling .GetVersion
I0229 00:56:53.280231 131854 main.go:141] libmachine: Using API Version 1
I0229 00:56:53.280252 131854 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 00:56:53.280549 131854 main.go:141] libmachine: () Calling .GetMachineName
I0229 00:56:53.280741 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
I0229 00:56:53.280877 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:56:53.281040 131854 start.go:159] libmachine.API.Create for "ingress-addon-legacy-270792" (driver="kvm2")
I0229 00:56:53.281085 131854 client.go:168] LocalClient.Create starting
I0229 00:56:53.281123 131854 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem
I0229 00:56:53.281166 131854 main.go:141] libmachine: Decoding PEM data...
I0229 00:56:53.281187 131854 main.go:141] libmachine: Parsing certificate...
I0229 00:56:53.281256 131854 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem
I0229 00:56:53.281283 131854 main.go:141] libmachine: Decoding PEM data...
I0229 00:56:53.281297 131854 main.go:141] libmachine: Parsing certificate...
I0229 00:56:53.281323 131854 main.go:141] libmachine: Running pre-create checks...
I0229 00:56:53.281338 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .PreCreateCheck
I0229 00:56:53.281673 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetConfigRaw
I0229 00:56:53.282070 131854 main.go:141] libmachine: Creating machine...
I0229 00:56:53.282086 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .Create
I0229 00:56:53.282224 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Creating KVM machine...
I0229 00:56:53.283367 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found existing default KVM network
I0229 00:56:53.285193 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.285019 131888 network.go:210] skipping subnet 192.168.39.0/24 that is reserved: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0229 00:56:53.285950 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.285892 131888 network.go:207] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209360}
I0229 00:56:53.291376 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | trying to create private KVM network mk-ingress-addon-legacy-270792 192.168.50.0/24...
I0229 00:56:53.353383 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | private KVM network mk-ingress-addon-legacy-270792 192.168.50.0/24 created
I0229 00:56:53.353411 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.353342 131888 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-115328/.minikube
I0229 00:56:53.353426 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting up store path in /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792 ...
I0229 00:56:53.353441 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Building disk image from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
I0229 00:56:53.353637 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Downloading /home/jenkins/minikube-integration/18063-115328/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-115328/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
I0229 00:56:53.573971 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.573849 131888 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa...
I0229 00:56:53.689655 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.689539 131888 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/ingress-addon-legacy-270792.rawdisk...
I0229 00:56:53.689689 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Writing magic tar header
I0229 00:56:53.689705 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Writing SSH key tar header
I0229 00:56:53.689719 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:53.689655 131888 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792 ...
I0229 00:56:53.689741 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792
I0229 00:56:53.689771 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792 (perms=drwx------)
I0229 00:56:53.689830 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube/machines
I0229 00:56:53.689853 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328/.minikube
I0229 00:56:53.689860 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube/machines (perms=drwxr-xr-x)
I0229 00:56:53.689870 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328/.minikube (perms=drwxr-xr-x)
I0229 00:56:53.689879 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration/18063-115328 (perms=drwxrwxr-x)
I0229 00:56:53.689888 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0229 00:56:53.689896 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0229 00:56:53.689903 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Creating domain...
I0229 00:56:53.689986 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-115328
I0229 00:56:53.690025 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0229 00:56:53.690044 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home/jenkins
I0229 00:56:53.690061 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Checking permissions on dir: /home
I0229 00:56:53.690075 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Skipping /home - not owner
I0229 00:56:53.691100 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) define libvirt domain using xml:
I0229 00:56:53.691124 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <domain type='kvm'>
I0229 00:56:53.691134 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <name>ingress-addon-legacy-270792</name>
I0229 00:56:53.691142 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <memory unit='MiB'>4096</memory>
I0229 00:56:53.691151 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <vcpu>2</vcpu>
I0229 00:56:53.691169 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <features>
I0229 00:56:53.691188 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <acpi/>
I0229 00:56:53.691203 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <apic/>
I0229 00:56:53.691215 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <pae/>
I0229 00:56:53.691226 131854 main.go:141] libmachine: (ingress-addon-legacy-270792)
I0229 00:56:53.691239 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </features>
I0229 00:56:53.691252 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <cpu mode='host-passthrough'>
I0229 00:56:53.691264 131854 main.go:141] libmachine: (ingress-addon-legacy-270792)
I0229 00:56:53.691273 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </cpu>
I0229 00:56:53.691300 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <os>
I0229 00:56:53.691321 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <type>hvm</type>
I0229 00:56:53.691329 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <boot dev='cdrom'/>
I0229 00:56:53.691338 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <boot dev='hd'/>
I0229 00:56:53.691348 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <bootmenu enable='no'/>
I0229 00:56:53.691356 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </os>
I0229 00:56:53.691365 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <devices>
I0229 00:56:53.691379 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <disk type='file' device='cdrom'>
I0229 00:56:53.691392 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/boot2docker.iso'/>
I0229 00:56:53.691401 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <target dev='hdc' bus='scsi'/>
I0229 00:56:53.691410 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <readonly/>
I0229 00:56:53.691416 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </disk>
I0229 00:56:53.691425 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <disk type='file' device='disk'>
I0229 00:56:53.691435 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <driver name='qemu' type='raw' cache='default' io='threads' />
I0229 00:56:53.691449 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <source file='/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/ingress-addon-legacy-270792.rawdisk'/>
I0229 00:56:53.691461 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <target dev='hda' bus='virtio'/>
I0229 00:56:53.691496 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </disk>
I0229 00:56:53.691520 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <interface type='network'>
I0229 00:56:53.691534 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <source network='mk-ingress-addon-legacy-270792'/>
I0229 00:56:53.691543 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <model type='virtio'/>
I0229 00:56:53.691557 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </interface>
I0229 00:56:53.691568 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <interface type='network'>
I0229 00:56:53.691581 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <source network='default'/>
I0229 00:56:53.691591 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <model type='virtio'/>
I0229 00:56:53.691617 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </interface>
I0229 00:56:53.691635 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <serial type='pty'>
I0229 00:56:53.691649 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <target port='0'/>
I0229 00:56:53.691660 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </serial>
I0229 00:56:53.691671 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <console type='pty'>
I0229 00:56:53.691683 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <target type='serial' port='0'/>
I0229 00:56:53.691696 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </console>
I0229 00:56:53.691712 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <rng model='virtio'>
I0229 00:56:53.691726 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) <backend model='random'>/dev/random</backend>
I0229 00:56:53.691737 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </rng>
I0229 00:56:53.691748 131854 main.go:141] libmachine: (ingress-addon-legacy-270792)
I0229 00:56:53.691758 131854 main.go:141] libmachine: (ingress-addon-legacy-270792)
I0229 00:56:53.691770 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </devices>
I0229 00:56:53.691784 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) </domain>
I0229 00:56:53.691800 131854 main.go:141] libmachine: (ingress-addon-legacy-270792)
I0229 00:56:53.695942 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:0f:73:06 in network default
I0229 00:56:53.697064 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Ensuring networks are active...
I0229 00:56:53.697089 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:53.697746 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Ensuring network default is active
I0229 00:56:53.698072 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Ensuring network mk-ingress-addon-legacy-270792 is active
I0229 00:56:53.698562 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Getting domain xml...
I0229 00:56:53.699192 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Creating domain...
I0229 00:56:54.884724 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Waiting to get IP...
I0229 00:56:54.885452 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:54.885857 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:54.885909 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:54.885846 131888 retry.go:31] will retry after 258.552427ms: waiting for machine to come up
I0229 00:56:55.146485 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:55.146940 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:55.146973 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:55.146893 131888 retry.go:31] will retry after 247.731338ms: waiting for machine to come up
I0229 00:56:55.396441 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:55.396855 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:55.396881 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:55.396799 131888 retry.go:31] will retry after 352.513436ms: waiting for machine to come up
I0229 00:56:55.751356 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:55.751829 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:55.751862 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:55.751786 131888 retry.go:31] will retry after 485.622043ms: waiting for machine to come up
I0229 00:56:56.239539 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:56.239979 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:56.240007 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:56.239930 131888 retry.go:31] will retry after 458.147456ms: waiting for machine to come up
I0229 00:56:56.699645 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:56.700004 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:56.700047 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:56.699978 131888 retry.go:31] will retry after 887.011958ms: waiting for machine to come up
I0229 00:56:57.589081 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:57.589501 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:57.589531 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:57.589448 131888 retry.go:31] will retry after 1.150502395s: waiting for machine to come up
I0229 00:56:58.741244 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:56:58.741603 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:56:58.741627 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:56:58.741558 131888 retry.go:31] will retry after 1.297235785s: waiting for machine to come up
I0229 00:57:00.040208 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:00.040569 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:57:00.040592 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:00.040526 131888 retry.go:31] will retry after 1.706919488s: waiting for machine to come up
I0229 00:57:01.749283 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:01.749749 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:57:01.749773 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:01.749690 131888 retry.go:31] will retry after 2.061316918s: waiting for machine to come up
I0229 00:57:03.812727 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:03.813159 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:57:03.813196 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:03.813108 131888 retry.go:31] will retry after 2.469155816s: waiting for machine to come up
I0229 00:57:06.285745 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:06.286135 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:57:06.286161 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:06.286092 131888 retry.go:31] will retry after 3.020885508s: waiting for machine to come up
I0229 00:57:09.308129 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:09.308482 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find current IP address of domain ingress-addon-legacy-270792 in network mk-ingress-addon-legacy-270792
I0229 00:57:09.308513 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | I0229 00:57:09.308419 131888 retry.go:31] will retry after 4.542039674s: waiting for machine to come up
I0229 00:57:13.852515 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:13.852978 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Found IP for machine: 192.168.50.187
I0229 00:57:13.853007 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has current primary IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:13.853017 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Reserving static IP address...
I0229 00:57:13.853403 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-270792", mac: "52:54:00:42:62:86", ip: "192.168.50.187"} in network mk-ingress-addon-legacy-270792
I0229 00:57:13.925337 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Getting to WaitForSSH function...
I0229 00:57:13.925401 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Reserved static IP address: 192.168.50.187
I0229 00:57:13.925433 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Waiting for SSH to be available...
I0229 00:57:13.927859 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:13.928312 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:42:62:86}
I0229 00:57:13.928344 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:13.928565 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Using SSH client type: external
I0229 00:57:13.928599 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa (-rw-------)
I0229 00:57:13.928643 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa -p 22] /usr/bin/ssh <nil>}
I0229 00:57:13.928663 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | About to run SSH command:
I0229 00:57:13.928682 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | exit 0
I0229 00:57:14.057426 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | SSH cmd err, output: <nil>:
I0229 00:57:14.057895 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) KVM machine creation complete!
I0229 00:57:14.058157 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetConfigRaw
I0229 00:57:14.058853 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:14.059079 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:14.059255 131854 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0229 00:57:14.059274 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetState
I0229 00:57:14.060580 131854 main.go:141] libmachine: Detecting operating system of created instance...
I0229 00:57:14.060595 131854 main.go:141] libmachine: Waiting for SSH to be available...
I0229 00:57:14.060600 131854 main.go:141] libmachine: Getting to WaitForSSH function...
I0229 00:57:14.060606 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.062536 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.062855 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.062887 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.063052 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:14.063250 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.063426 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.063539 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:14.063707 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:14.063945 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:14.063959 131854 main.go:141] libmachine: About to run SSH command:
exit 0
I0229 00:57:14.173151 131854 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 00:57:14.173173 131854 main.go:141] libmachine: Detecting the provisioner...
I0229 00:57:14.173185 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.175915 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.176247 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.176279 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.176481 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:14.176694 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.176894 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.177043 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:14.177182 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:14.177342 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:14.177353 131854 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0229 00:57:14.286577 131854 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0229 00:57:14.286641 131854 main.go:141] libmachine: found compatible host: buildroot
I0229 00:57:14.286648 131854 main.go:141] libmachine: Provisioning with buildroot...
I0229 00:57:14.286656 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
I0229 00:57:14.286924 131854 buildroot.go:166] provisioning hostname "ingress-addon-legacy-270792"
I0229 00:57:14.286955 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
I0229 00:57:14.287161 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.289612 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.289966 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.289997 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.290121 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:14.290305 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.290464 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.290603 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:14.290786 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:14.290951 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:14.290964 131854 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-270792 && echo "ingress-addon-legacy-270792" | sudo tee /etc/hostname
I0229 00:57:14.412326 131854 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-270792
I0229 00:57:14.412368 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.415089 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.415380 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.415415 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.415632 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:14.415826 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.415997 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.416263 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:14.416489 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:14.416704 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:14.416725 131854 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-270792' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-270792/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-270792' | sudo tee -a /etc/hosts;
fi
fi
I0229 00:57:14.536262 131854 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 00:57:14.536292 131854 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-115328/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-115328/.minikube}
I0229 00:57:14.536309 131854 buildroot.go:174] setting up certificates
I0229 00:57:14.536321 131854 provision.go:83] configureAuth start
I0229 00:57:14.536331 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetMachineName
I0229 00:57:14.536676 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
I0229 00:57:14.539109 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.539499 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.539541 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.539650 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.541833 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.542199 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.542220 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.542356 131854 provision.go:138] copyHostCerts
I0229 00:57:14.542389 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
I0229 00:57:14.542428 131854 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem, removing ...
I0229 00:57:14.542447 131854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem
I0229 00:57:14.542525 131854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/ca.pem (1078 bytes)
I0229 00:57:14.542614 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
I0229 00:57:14.542637 131854 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem, removing ...
I0229 00:57:14.542648 131854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem
I0229 00:57:14.542684 131854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/cert.pem (1123 bytes)
I0229 00:57:14.542744 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
I0229 00:57:14.542768 131854 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem, removing ...
I0229 00:57:14.542777 131854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem
I0229 00:57:14.542808 131854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-115328/.minikube/key.pem (1679 bytes)
I0229 00:57:14.542872 131854 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-270792 san=[192.168.50.187 192.168.50.187 localhost 127.0.0.1 minikube ingress-addon-legacy-270792]
I0229 00:57:14.736454 131854 provision.go:172] copyRemoteCerts
I0229 00:57:14.736518 131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0229 00:57:14.736545 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.739491 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.739827 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.739858 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.740008 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:14.740267 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.740450 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:14.740611 131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
I0229 00:57:14.825228 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0229 00:57:14.825299 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0229 00:57:14.850239 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0229 00:57:14.850323 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0229 00:57:14.874442 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem -> /etc/docker/server.pem
I0229 00:57:14.874511 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0229 00:57:14.898203 131854 provision.go:86] duration metric: configureAuth took 361.866166ms
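configureAuth has now staged exactly the three files that the dockerd TLS flags in the unit written below will point at. A quick check on the guest (sketch):
  ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem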
I0229 00:57:14.898233 131854 buildroot.go:189] setting minikube options for container-runtime
I0229 00:57:14.898450 131854 config.go:182] Loaded profile config "ingress-addon-legacy-270792": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0229 00:57:14.898480 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:14.898788 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:14.901489 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.901915 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:14.901945 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:14.902118 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:14.902302 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.902513 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:14.902695 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:14.902881 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:14.903046 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:14.903058 131854 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0229 00:57:15.011377 131854 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0229 00:57:15.011401 131854 buildroot.go:70] root file system type: tmpfs
I0229 00:57:15.011541 131854 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0229 00:57:15.011571 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:15.014048 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.014353 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:15.014381 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.014592 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:15.014776 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:15.014938 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:15.015112 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:15.015257 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:15.015456 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:15.015543 131854 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0229 00:57:15.140512 131854 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0229 00:57:15.140542 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:15.143344 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.143700 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:15.143730 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.143895 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:15.144095 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:15.144297 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:15.144403 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:15.144595 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:15.144751 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:15.144766 131854 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0229 00:57:15.917893 131854 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
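The command above is a guarded write-then-swap: the candidate unit goes to docker.service.new, and only when diff reports a difference (or, as here, cannot stat a not-yet-existing docker.service) is it moved into place and the daemon reloaded. The same idiom as a reusable sketch (not minikube code):
  update_unit() {                            # usage: update_unit /lib/systemd/system/docker.service < new-unit
    local unit="$1" new="$1.new"
    sudo tee "$new" >/dev/null               # stage candidate content from stdin
    if ! sudo diff -u "$unit" "$new"; then   # differs, or target missing (diff exits non-zero either way)
      sudo mv "$new" "$unit"
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
  }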
I0229 00:57:15.917915 131854 main.go:141] libmachine: Checking connection to Docker...
I0229 00:57:15.917925 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetURL
I0229 00:57:15.919252 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | Using libvirt version 6000000
I0229 00:57:15.921561 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.921916 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:15.921953 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.922143 131854 main.go:141] libmachine: Docker is up and running!
I0229 00:57:15.922161 131854 main.go:141] libmachine: Reticulating splines...
I0229 00:57:15.922170 131854 client.go:171] LocalClient.Create took 22.641073032s
I0229 00:57:15.922196 131854 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-270792" took 22.64115801s
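"Docker is up and running" means the daemon answered on its TLS endpoint. From the host side the same thing can be verified with the docker CLI, as a sketch, using the host cert copies staged by copyHostCerts earlier:
  MINIKUBE_DIR=/home/jenkins/minikube-integration/18063-115328/.minikube
  docker --tlsverify --tlscacert "$MINIKUBE_DIR/ca.pem" --tlscert "$MINIKUBE_DIR/cert.pem" \
    --tlskey "$MINIKUBE_DIR/key.pem" -H tcp://192.168.50.187:2376 version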
I0229 00:57:15.922208 131854 start.go:300] post-start starting for "ingress-addon-legacy-270792" (driver="kvm2")
I0229 00:57:15.922221 131854 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0229 00:57:15.922240 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:15.922541 131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0229 00:57:15.922565 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:15.924877 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.925176 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:15.925205 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:15.925302 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:15.925498 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:15.925668 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:15.925827 131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
I0229 00:57:16.013245 131854 ssh_runner.go:195] Run: cat /etc/os-release
I0229 00:57:16.017500 131854 info.go:137] Remote host: Buildroot 2023.02.9
I0229 00:57:16.017525 131854 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/addons for local assets ...
I0229 00:57:16.017588 131854 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-115328/.minikube/files for local assets ...
I0229 00:57:16.017675 131854 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> 1225952.pem in /etc/ssl/certs
I0229 00:57:16.017688 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> /etc/ssl/certs/1225952.pem
I0229 00:57:16.017769 131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0229 00:57:16.027853 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /etc/ssl/certs/1225952.pem (1708 bytes)
I0229 00:57:16.052066 131854 start.go:303] post-start completed in 129.839713ms
I0229 00:57:16.052118 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetConfigRaw
I0229 00:57:16.052663 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
I0229 00:57:16.055188 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.055574 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:16.055598 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.055830 131854 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/config.json ...
I0229 00:57:16.056038 131854 start.go:128] duration metric: createHost completed in 22.793665743s
I0229 00:57:16.056071 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:16.058312 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.058654 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:16.058681 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.058810 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:16.058981 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:16.059157 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:16.059296 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:16.059473 131854 main.go:141] libmachine: Using SSH client type: native
I0229 00:57:16.059634 131854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.50.187 22 <nil> <nil>}
I0229 00:57:16.059644 131854 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0229 00:57:16.170405 131854 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709168236.142294246
I0229 00:57:16.170431 131854 fix.go:206] guest clock: 1709168236.142294246
I0229 00:57:16.170438 131854 fix.go:219] Guest: 2024-02-29 00:57:16.142294246 +0000 UTC Remote: 2024-02-29 00:57:16.056052898 +0000 UTC m=+28.099227171 (delta=86.241348ms)
I0229 00:57:16.170459 131854 fix.go:190] guest clock delta is within tolerance: 86.241348ms
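The reported delta is just the guest timestamp minus the host-side reference time, and the figure checks out:
  echo '1709168236.142294246 - 1709168236.056052898' | bc
  # .086241348 s, i.e. the 86.241348ms above, well inside tolerance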
I0229 00:57:16.170464 131854 start.go:83] releasing machines lock for "ingress-addon-legacy-270792", held for 22.908197836s
I0229 00:57:16.170483 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:16.170749 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
I0229 00:57:16.173371 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.173661 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:16.173699 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.173857 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:16.174510 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:16.174712 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .DriverName
I0229 00:57:16.174773 131854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0229 00:57:16.174833 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:16.174976 131854 ssh_runner.go:195] Run: cat /version.json
I0229 00:57:16.175003 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHHostname
I0229 00:57:16.177483 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.177710 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.177859 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:16.177890 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.178032 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:16.178128 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:16.178163 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:16.178230 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:16.178278 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHPort
I0229 00:57:16.178425 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:16.178497 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHKeyPath
I0229 00:57:16.178578 131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
I0229 00:57:16.178667 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetSSHUsername
I0229 00:57:16.178780 131854 sshutil.go:53] new ssh client: &{IP:192.168.50.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa Username:docker}
I0229 00:57:16.279726 131854 ssh_runner.go:195] Run: systemctl --version
I0229 00:57:16.285982 131854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0229 00:57:16.291689 131854 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0229 00:57:16.291760 131854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0229 00:57:16.301707 131854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0229 00:57:16.321505 131854 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
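The two find/sed passes above rewrite any bridge and podman CNI configs so their subnet (and, for podman, gateway) match the pod CIDR minikube will feed to kubeadm. A quick check of the one file reported as touched (sketch):
  grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
  # expect subnet 10.244.0.0/16 and gateway 10.244.0.1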
I0229 00:57:16.321540 131854 start.go:475] detecting cgroup driver to use...
I0229 00:57:16.321661 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0229 00:57:16.346146 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0229 00:57:16.360056 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0229 00:57:16.371726 131854 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0229 00:57:16.371802 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0229 00:57:16.382190 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 00:57:16.392089 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0229 00:57:16.402511 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 00:57:16.412807 131854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0229 00:57:16.423080 131854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
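Taken together, the sed runs since the crictl.yaml write pin containerd to the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, pause:3.2 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. All four are visible in one pass (sketch):
  grep -nE 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2' /etc/containerd/config.toml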
I0229 00:57:16.433206 131854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0229 00:57:16.442261 131854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0229 00:57:16.451149 131854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 00:57:16.560936 131854 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0229 00:57:16.588457 131854 start.go:475] detecting cgroup driver to use...
I0229 00:57:16.588562 131854 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0229 00:57:16.608488 131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0229 00:57:16.622636 131854 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0229 00:57:16.641147 131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0229 00:57:16.654317 131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 00:57:16.666793 131854 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0229 00:57:16.696033 131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 00:57:16.709644 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0229 00:57:16.728600 131854 ssh_runner.go:195] Run: which cri-dockerd
I0229 00:57:16.732535 131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0229 00:57:16.741858 131854 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0229 00:57:16.759377 131854 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0229 00:57:16.872931 131854 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0229 00:57:17.004655 131854 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0229 00:57:17.004799 131854 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0229 00:57:17.022087 131854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 00:57:17.135572 131854 ssh_runner.go:195] Run: sudo systemctl restart docker
I0229 00:57:18.509258 131854 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.373650374s)
I0229 00:57:18.509336 131854 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0229 00:57:18.539017 131854 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0229 00:57:18.565331 131854 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0229 00:57:18.565376 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) Calling .GetIP
I0229 00:57:18.567984 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:18.568359 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:62:86", ip: ""} in network mk-ingress-addon-legacy-270792: {Iface:virbr1 ExpiryTime:2024-02-29 01:57:07 +0000 UTC Type:0 Mac:52:54:00:42:62:86 Iaid: IPaddr:192.168.50.187 Prefix:24 Hostname:ingress-addon-legacy-270792 Clientid:01:52:54:00:42:62:86}
I0229 00:57:18.568385 131854 main.go:141] libmachine: (ingress-addon-legacy-270792) DBG | domain ingress-addon-legacy-270792 has defined IP address 192.168.50.187 and MAC address 52:54:00:42:62:86 in network mk-ingress-addon-legacy-270792
I0229 00:57:18.568569 131854 ssh_runner.go:195] Run: grep 192.168.50.1 host.minikube.internal$ /etc/hosts
I0229 00:57:18.572910 131854 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
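That hosts-file update is a grep-out-then-append idiom, which keeps it idempotent across restarts. Spelled out (sketch; cp rather than mv preserves the ownership and mode of /etc/hosts, and the same trick is reused below for control-plane.minikube.internal):
  { grep -v $'\thost.minikube.internal$' /etc/hosts   # strip any stale entry
    printf '192.168.50.1\thost.minikube.internal\n'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts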
I0229 00:57:18.585836 131854 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0229 00:57:18.585886 131854 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0229 00:57:18.601577 131854 docker.go:685] Got preloaded images:
I0229 00:57:18.601594 131854 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0229 00:57:18.601642 131854 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0229 00:57:18.611265 131854 ssh_runner.go:195] Run: which lz4
I0229 00:57:18.615078 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0229 00:57:18.615169 131854 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0229 00:57:18.619557 131854 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0229 00:57:18.619592 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0229 00:57:20.100666 131854 docker.go:649] Took 1.485514 seconds to copy over tarball
I0229 00:57:20.100758 131854 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0229 00:57:22.276056 131854 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.175257365s)
I0229 00:57:22.276096 131854 ssh_runner.go:146] rm: /preloaded.tar.lz4
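Because the stat probe found no tarball on the guest, the ~404 MiB preload is copied over and unpacked into /var, which populates /var/lib/docker directly. Done by hand it would look roughly like this (sketch; staging under /tmp is an assumption, since a plain scp cannot write to /):
  KEY=/home/jenkins/minikube-integration/18063-115328/.minikube/machines/ingress-addon-legacy-270792/id_rsa
  TARBALL=/home/jenkins/minikube-integration/18063-115328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
  scp -i "$KEY" "$TARBALL" docker@192.168.50.187:/tmp/preloaded.tar.lz4
  ssh -i "$KEY" docker@192.168.50.187 \
    'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm -f /tmp/preloaded.tar.lz4'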
I0229 00:57:22.315638 131854 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0229 00:57:22.326621 131854 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0229 00:57:22.345988 131854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 00:57:22.468935 131854 ssh_runner.go:195] Run: sudo systemctl restart docker
I0229 00:57:26.881060 131854 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.412082706s)
I0229 00:57:26.881145 131854 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0229 00:57:26.898886 131854 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0229 00:57:26.898905 131854 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0229 00:57:26.898914 131854 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
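Note the name mismatch that appears to drive the churn below: the preload ships k8s.gcr.io/* tags, while this run expects registry.k8s.io/* names, so each image is flagged "needs transfer", removed, and queued to load from a cache that turns out to be empty. A retag loop would bridge the two names (hypothetical illustration, not what minikube does here):
  for img in kube-apiserver:v1.18.20 kube-controller-manager:v1.18.20 kube-scheduler:v1.18.20 \
             kube-proxy:v1.18.20 pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
    docker tag "k8s.gcr.io/$img" "registry.k8s.io/$img"   # same bytes, expected name
  done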
I0229 00:57:26.900531 131854 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0229 00:57:26.900549 131854 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0229 00:57:26.900531 131854 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 00:57:26.900609 131854 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 00:57:26.900531 131854 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 00:57:26.900540 131854 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0229 00:57:26.900531 131854 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 00:57:26.900546 131854 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0229 00:57:26.901543 131854 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0229 00:57:26.901603 131854 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 00:57:26.901620 131854 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0229 00:57:26.901657 131854 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 00:57:26.901711 131854 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 00:57:26.901657 131854 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0229 00:57:26.901542 131854 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 00:57:26.901542 131854 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0229 00:57:27.034919 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0229 00:57:27.045554 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0229 00:57:27.047766 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0229 00:57:27.053166 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0229 00:57:27.054047 131854 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0229 00:57:27.054098 131854 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0229 00:57:27.054139 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0229 00:57:27.058802 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0229 00:57:27.063213 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0229 00:57:27.070023 131854 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0229 00:57:27.070077 131854 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 00:57:27.070121 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0229 00:57:27.083565 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0229 00:57:27.100745 131854 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0229 00:57:27.100796 131854 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 00:57:27.100839 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0229 00:57:27.123507 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0229 00:57:27.124344 131854 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0229 00:57:27.124385 131854 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 00:57:27.124426 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0229 00:57:27.134567 131854 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0229 00:57:27.134602 131854 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0229 00:57:27.134630 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0229 00:57:27.134643 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0229 00:57:27.134778 131854 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0229 00:57:27.134816 131854 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0229 00:57:27.134874 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0229 00:57:27.144231 131854 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0229 00:57:27.144267 131854 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0229 00:57:27.144308 131854 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0229 00:57:27.160261 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0229 00:57:27.177560 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0229 00:57:27.178957 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0229 00:57:27.185048 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0229 00:57:27.190009 131854 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0229 00:57:27.487476 131854 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0229 00:57:27.505673 131854 cache_images.go:92] LoadImages completed in 606.743591ms
W0229 00:57:27.505767 131854 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-115328/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
I0229 00:57:27.505853 131854 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0229 00:57:27.535292 131854 cni.go:84] Creating CNI manager for ""
I0229 00:57:27.535311 131854 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0229 00:57:27.535345 131854 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0229 00:57:27.535370 131854 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.187 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-270792 NodeName:ingress-addon-legacy-270792 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0229 00:57:27.535539 131854 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.50.187
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "ingress-addon-legacy-270792"
kubeletExtraArgs:
node-ip: 192.168.50.187
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.50.187"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
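This rendered config is what later gets written to /var/tmp/minikube/kubeadm.yaml.new on the guest. One way to sanity-check it against the pinned kubeadm before an init, as a sketch (treat the exact flag set on a v1.18 kubeadm as an assumption):
  sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run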
I0229 00:57:27.535622 131854 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-270792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.187
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0229 00:57:27.535680 131854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0229 00:57:27.545659 131854 binaries.go:44] Found k8s binaries, skipping transfer
I0229 00:57:27.545721 131854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0229 00:57:27.556011 131854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
I0229 00:57:27.572878 131854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0229 00:57:27.589683 131854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2130 bytes)
I0229 00:57:27.606173 131854 ssh_runner.go:195] Run: grep 192.168.50.187 control-plane.minikube.internal$ /etc/hosts
I0229 00:57:27.609949 131854 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.187 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0229 00:57:27.621658 131854 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792 for IP: 192.168.50.187
I0229 00:57:27.621694 131854 certs.go:190] acquiring lock for shared ca certs: {Name:mkeeef7429d1e308d27d608f1ba62d5b46b59bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:27.621877 131854 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key
I0229 00:57:27.621915 131854 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key
I0229 00:57:27.621957 131854 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.key
I0229 00:57:27.621969 131854 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.crt with IP's: []
I0229 00:57:27.812961 131854 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.crt ...
I0229 00:57:27.812993 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.crt: {Name:mkfd2e599baea25b414b240f7a7347f9b074f404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:27.813155 131854 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.key ...
I0229 00:57:27.813171 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/client.key: {Name:mk7893dc33425ec30964686ef54c96a435eef65d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:27.813244 131854 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79
I0229 00:57:27.813261 131854 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79 with IP's: [192.168.50.187 10.96.0.1 127.0.0.1 10.0.0.1]
I0229 00:57:27.952871 131854 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79 ...
I0229 00:57:27.952909 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79: {Name:mk8d840108f5ce5b36775ceb882186179d17da57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:27.953063 131854 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79 ...
I0229 00:57:27.953077 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79: {Name:mk7e29416ae2d5d7bbd1b81391d721d2e3fb8793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:27.953144 131854 certs.go:337] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt.ff890e79 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt
I0229 00:57:27.953210 131854 certs.go:341] copying /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key.ff890e79 -> /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key
I0229 00:57:27.953258 131854 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key
I0229 00:57:27.953271 131854 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt with IP's: []
I0229 00:57:28.158575 131854 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt ...
I0229 00:57:28.158611 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt: {Name:mk53acca5cc64b570a221260774610c3bc74e1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:28.158767 131854 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key ...
I0229 00:57:28.158781 131854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key: {Name:mkf4026af2f407907d1b5d938a2e5a7f64e813eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 00:57:28.158851 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0229 00:57:28.158877 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0229 00:57:28.158890 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0229 00:57:28.158902 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0229 00:57:28.158917 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0229 00:57:28.158929 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0229 00:57:28.158941 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0229 00:57:28.158952 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0229 00:57:28.159003 131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem (1338 bytes)
W0229 00:57:28.159040 131854 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595_empty.pem, impossibly tiny 0 bytes
I0229 00:57:28.159049 131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca-key.pem (1675 bytes)
I0229 00:57:28.159075 131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/ca.pem (1078 bytes)
I0229 00:57:28.159097 131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/cert.pem (1123 bytes)
I0229 00:57:28.159117 131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/home/jenkins/minikube-integration/18063-115328/.minikube/certs/key.pem (1679 bytes)
I0229 00:57:28.159152 131854 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem (1708 bytes)
I0229 00:57:28.159179 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0229 00:57:28.159192 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem -> /usr/share/ca-certificates/122595.pem
I0229 00:57:28.159204 131854 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem -> /usr/share/ca-certificates/1225952.pem
I0229 00:57:28.159840 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0229 00:57:28.186394 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0229 00:57:28.210802 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0229 00:57:28.235263 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/profiles/ingress-addon-legacy-270792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0229 00:57:28.259408 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0229 00:57:28.283009 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0229 00:57:28.306973 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0229 00:57:28.330869 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0229 00:57:28.355208 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0229 00:57:28.379227 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/certs/122595.pem --> /usr/share/ca-certificates/122595.pem (1338 bytes)
I0229 00:57:28.403192 131854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-115328/.minikube/files/etc/ssl/certs/1225952.pem --> /usr/share/ca-certificates/1225952.pem (1708 bytes)
I0229 00:57:28.426826 131854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
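
Each NewFileAsset line above pairs a file under the test's .minikube tree with a destination inside the VM, and the scp lines then push each pair over the driver's SSH session. A minimal sketch of that mapping, with abbreviated paths and an illustrative asset type (not minikube's real vm_assets API):

    package main

    import "fmt"

    // fileAsset pairs a certificate on the build host with its destination
    // inside the guest VM, mirroring the NewFileAsset -> scp pairs logged above.
    type fileAsset struct {
        src string // path under the host's .minikube directory
        dst string // path inside the VM
    }

    func main() {
        assets := []fileAsset{
            {"profiles/ingress-addon-legacy-270792/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
            {"ca.crt", "/var/lib/minikube/certs/ca.crt"},
            {"ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"},
        }
        for _, a := range assets {
            // minikube copies each pair over SSH (the scp lines above)
            fmt.Printf("scp %s --> %s\n", a.src, a.dst)
        }
    }
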
I0229 00:57:28.443292 131854 ssh_runner.go:195] Run: openssl version
I0229 00:57:28.448788 131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1225952.pem && ln -fs /usr/share/ca-certificates/1225952.pem /etc/ssl/certs/1225952.pem"
I0229 00:57:28.459987 131854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1225952.pem
I0229 00:57:28.464527 131854 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 00:52 /usr/share/ca-certificates/1225952.pem
I0229 00:57:28.464581 131854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1225952.pem
I0229 00:57:28.470312 131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1225952.pem /etc/ssl/certs/3ec20f2e.0"
I0229 00:57:28.481659 131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0229 00:57:28.492818 131854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0229 00:57:28.497442 131854 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 00:47 /usr/share/ca-certificates/minikubeCA.pem
I0229 00:57:28.497493 131854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0229 00:57:28.503104 131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0229 00:57:28.513537 131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/122595.pem && ln -fs /usr/share/ca-certificates/122595.pem /etc/ssl/certs/122595.pem"
I0229 00:57:28.524326 131854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/122595.pem
I0229 00:57:28.528850 131854 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 00:52 /usr/share/ca-certificates/122595.pem
I0229 00:57:28.528912 131854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/122595.pem
I0229 00:57:28.534705 131854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/122595.pem /etc/ssl/certs/51391683.0"
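
The openssl/ln sequence above implements the standard CA rehash scheme: each certificate installed under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and a symlink named <subject-hash>.0 is created in /etc/ssl/certs so OpenSSL can find the certificate by subject hash (b5213941.0 for minikubeCA.pem, for example). A minimal Go sketch of one iteration, assuming openssl on PATH and write access to /etc/ssl/certs; installCACert is an illustrative name, not minikube's helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert hashes the certificate the way the log does and links it
    // under /etc/ssl/certs as <subject-hash>.0 (the effect of ln -fs above).
    func installCACert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pem, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // -f: replace a stale link if present
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
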
I0229 00:57:28.545369 131854 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0229 00:57:28.549706 131854 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0229 00:57:28.549750 131854 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-270792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-270792 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.187 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 00:57:28.549924 131854 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0229 00:57:28.567151 131854 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0229 00:57:28.576781 131854 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0229 00:57:28.586034 131854 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 00:57:28.595191 131854 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
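
This "config check failed" is expected on a fresh node: minikube probes for leftover control-plane kubeconfigs with a single ls -la, and a non-zero exit (status 2, file not found) simply means there is no stale configuration to clean up before kubeadm init runs. A sketch of the probe, with a hypothetical helper name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hasStaleKubeconfigs reports whether all four control-plane kubeconfigs
    // exist; ls exits non-zero as soon as any one of them is missing.
    func hasStaleKubeconfigs() bool {
        err := exec.Command("sudo", "ls", "-la",
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf").Run()
        return err == nil
    }

    func main() {
        if !hasStaleKubeconfigs() {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }
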
I0229 00:57:28.595229 131854 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 00:57:28.643915 131854 kubeadm.go:322] W0229 00:57:28.620093 1365 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 00:57:28.728462 131854 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0229 00:57:28.759062 131854 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0229 00:57:28.831499 131854 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 00:57:31.463471 131854 kubeadm.go:322] W0229 00:57:31.440525 1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 00:57:31.464345 131854 kubeadm.go:322] W0229 00:57:31.441532 1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 00:59:26.459443 131854 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 00:59:26.459610 131854 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0229 00:59:26.460568 131854 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 00:59:26.460647 131854 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 00:59:26.460744 131854 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 00:59:26.460860 131854 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 00:59:26.460976 131854 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 00:59:26.461082 131854 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 00:59:26.461157 131854 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 00:59:26.461212 131854 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 00:59:26.461281 131854 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 00:59:26.463087 131854 out.go:204] - Generating certificates and keys ...
I0229 00:59:26.463179 131854 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 00:59:26.463245 131854 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 00:59:26.463344 131854 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0229 00:59:26.463395 131854 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0229 00:59:26.463470 131854 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0229 00:59:26.463536 131854 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0229 00:59:26.463606 131854 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0229 00:59:26.463712 131854 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
I0229 00:59:26.463758 131854 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0229 00:59:26.463888 131854 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
I0229 00:59:26.463955 131854 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0229 00:59:26.464017 131854 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0229 00:59:26.464057 131854 kubeadm.go:322] [certs] Generating "sa" key and public key
I0229 00:59:26.464103 131854 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 00:59:26.464149 131854 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 00:59:26.464197 131854 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 00:59:26.464252 131854 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 00:59:26.464302 131854 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 00:59:26.464357 131854 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 00:59:26.465890 131854 out.go:204] - Booting up control plane ...
I0229 00:59:26.465986 131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 00:59:26.466068 131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 00:59:26.466142 131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 00:59:26.466213 131854 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 00:59:26.466347 131854 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 00:59:26.466392 131854 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 00:59:26.466477 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 00:59:26.466641 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 00:59:26.466708 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 00:59:26.466877 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 00:59:26.466936 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 00:59:26.467100 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 00:59:26.467157 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 00:59:26.467338 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 00:59:26.467434 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 00:59:26.467625 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 00:59:26.467634 131854 kubeadm.go:322]
I0229 00:59:26.467690 131854 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 00:59:26.467749 131854 kubeadm.go:322] timed out waiting for the condition
I0229 00:59:26.467763 131854 kubeadm.go:322]
I0229 00:59:26.467818 131854 kubeadm.go:322] This error is likely caused by:
I0229 00:59:26.467867 131854 kubeadm.go:322] - The kubelet is not running
I0229 00:59:26.467971 131854 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 00:59:26.467982 131854 kubeadm.go:322]
I0229 00:59:26.468070 131854 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 00:59:26.468105 131854 kubeadm.go:322] - 'systemctl status kubelet'
I0229 00:59:26.468133 131854 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 00:59:26.468139 131854 kubeadm.go:322]
I0229 00:59:26.468264 131854 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 00:59:26.468372 131854 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 00:59:26.468388 131854 kubeadm.go:322]
I0229 00:59:26.468499 131854 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0229 00:59:26.468574 131854 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0229 00:59:26.468672 131854 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 00:59:26.468734 131854 kubeadm.go:322] - 'docker logs CONTAINERID'
I0229 00:59:26.468773 131854 kubeadm.go:322]
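
The repeated [kubelet-check] failures above are the heart of this test failure: kubeadm polls the kubelet's healthz endpoint on localhost:10248 for up to 4m0s, and every probe is refused because the kubelet process keeps exiting. Roughly, the probe loop looks like this (a sketch, not kubeadm's actual code; the 5s interval is illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForKubelet polls http://localhost:10248/healthz, as the
    // [kubelet-check] phase does, until it answers 200 OK or the wait expires.
    func waitForKubelet(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // kubelet is up and healthy
                }
            }
            time.Sleep(5 * time.Second)
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        if err := waitForKubelet(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
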
W0229 00:59:26.468920 131854 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-270792 localhost] and IPs [192.168.50.187 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 00:57:28.620093 1365 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 00:57:31.440525 1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 00:57:31.441532 1365 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0229 00:59:26.469013 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0229 00:59:27.204472 131854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0229 00:59:27.219158 131854 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 00:59:27.228498 131854 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
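
What follows is minikube's single built-in retry: after the first init timed out, it ran kubeadm reset --force (above), re-ran the stale-config probe, and now re-issues the identical kubeadm init; the "Using existing ..." certificate lines in the second attempt show it reusing the keys generated the first time. The control flow, reduced to a sketch (runSSH stands in for minikube's ssh_runner; command strings are trimmed from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    const (
        initCmd  = "kubeadm init --config /var/tmp/minikube/kubeadm.yaml"
        resetCmd = "kubeadm reset --cri-socket /var/run/dockershim.sock --force"
    )

    // runSSH executes a command in the guest via bash -c, as the
    // Run/Start log lines above do.
    func runSSH(cmd string) error {
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // initWithRetry mirrors the flow visible in the log: when the first
    // kubeadm init times out, minikube resets the node and retries once.
    func initWithRetry() error {
        if err := runSSH(initCmd); err == nil {
            return nil
        }
        _ = runSSH(resetCmd) // best-effort cleanup between attempts
        return runSSH(initCmd)
    }

    func main() {
        if err := initWithRetry(); err != nil {
            fmt.Println("initialization failed twice:", err)
        }
    }
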
I0229 00:59:27.228535 131854 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 00:59:27.283617 131854 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 00:59:27.283721 131854 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 00:59:27.484259 131854 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 00:59:27.484363 131854 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 00:59:27.484515 131854 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 00:59:27.626940 131854 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 00:59:27.628113 131854 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 00:59:27.628191 131854 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 00:59:27.756916 131854 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 00:59:27.759155 131854 out.go:204] - Generating certificates and keys ...
I0229 00:59:27.759258 131854 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 00:59:27.759348 131854 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 00:59:27.759449 131854 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0229 00:59:27.759526 131854 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0229 00:59:27.759616 131854 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0229 00:59:27.759690 131854 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0229 00:59:27.759802 131854 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0229 00:59:27.759903 131854 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0229 00:59:27.760008 131854 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0229 00:59:27.763180 131854 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0229 00:59:27.763245 131854 kubeadm.go:322] [certs] Using the existing "sa" key
I0229 00:59:27.763345 131854 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 00:59:27.894369 131854 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 00:59:28.208408 131854 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 00:59:28.436268 131854 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 00:59:28.804982 131854 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 00:59:28.805742 131854 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 00:59:28.807665 131854 out.go:204] - Booting up control plane ...
I0229 00:59:28.807763 131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 00:59:28.813940 131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 00:59:28.821265 131854 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 00:59:28.822192 131854 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 00:59:28.824137 131854 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 01:00:08.826388 131854 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 01:00:08.827564 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:00:08.829307 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:00:13.828536 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:00:13.828742 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:00:23.829347 131854 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:00:23.829567 131854 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:03:28.827609 131854 kubeadm.go:322]
I0229 01:03:28.827706 131854 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 01:03:28.827760 131854 kubeadm.go:322] timed out waiting for the condition
I0229 01:03:28.827786 131854 kubeadm.go:322]
I0229 01:03:28.827823 131854 kubeadm.go:322] This error is likely caused by:
I0229 01:03:28.827911 131854 kubeadm.go:322] - The kubelet is not running
I0229 01:03:28.828089 131854 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 01:03:28.828104 131854 kubeadm.go:322]
I0229 01:03:28.828222 131854 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 01:03:28.828283 131854 kubeadm.go:322] - 'systemctl status kubelet'
I0229 01:03:28.828341 131854 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 01:03:28.828354 131854 kubeadm.go:322]
I0229 01:03:28.828491 131854 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 01:03:28.828594 131854 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 01:03:28.828602 131854 kubeadm.go:322]
I0229 01:03:28.828734 131854 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0229 01:03:28.828822 131854 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0229 01:03:28.828930 131854 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 01:03:28.828988 131854 kubeadm.go:322] - 'docker logs CONTAINERID'
I0229 01:03:28.828999 131854 kubeadm.go:322]
I0229 01:03:28.829686 131854 kubeadm.go:322] W0229 00:59:27.271908 18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 01:03:28.829952 131854 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0229 01:03:28.830135 131854 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0229 01:03:28.830307 131854 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 01:03:28.830498 131854 kubeadm.go:322] W0229 00:59:28.809110 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 01:03:28.830621 131854 kubeadm.go:322] W0229 00:59:28.810288 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 01:03:28.830737 131854 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 01:03:28.830828 131854 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0229 01:03:28.830955 131854 kubeadm.go:406] StartCluster complete in 6m0.281208007s
I0229 01:03:28.831123 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0229 01:03:28.848565 131854 logs.go:276] 0 containers: []
W0229 01:03:28.848583 131854 logs.go:278] No container was found matching "kube-apiserver"
I0229 01:03:28.848639 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0229 01:03:28.866889 131854 logs.go:276] 0 containers: []
W0229 01:03:28.866913 131854 logs.go:278] No container was found matching "etcd"
I0229 01:03:28.866978 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0229 01:03:28.884032 131854 logs.go:276] 0 containers: []
W0229 01:03:28.884053 131854 logs.go:278] No container was found matching "coredns"
I0229 01:03:28.884113 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0229 01:03:28.903440 131854 logs.go:276] 0 containers: []
W0229 01:03:28.903459 131854 logs.go:278] No container was found matching "kube-scheduler"
I0229 01:03:28.903508 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0229 01:03:28.940982 131854 logs.go:276] 0 containers: []
W0229 01:03:28.941010 131854 logs.go:278] No container was found matching "kube-proxy"
I0229 01:03:28.941069 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0229 01:03:28.966084 131854 logs.go:276] 0 containers: []
W0229 01:03:28.966112 131854 logs.go:278] No container was found matching "kube-controller-manager"
I0229 01:03:28.966171 131854 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0229 01:03:28.989010 131854 logs.go:276] 0 containers: []
W0229 01:03:28.989034 131854 logs.go:278] No container was found matching "kindnet"
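
With the retry also timed out, minikube assembles its failure report by querying Docker once per expected control-plane component using a name filter; zero matches for every component, as above, means no Kubernetes container was ever created, pointing at the kubelet itself rather than at a crashing pod. A sketch of that query loop, simplified from the commands logged above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the docker ps queries above: return the IDs of
    // containers, running or exited, whose name matches k8s_<component>.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, _ := listContainers(c)
            fmt.Printf("%d containers matching %q: %v\n", len(ids), c, ids)
        }
    }
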
I0229 01:03:28.989051 131854 logs.go:123] Gathering logs for kubelet ...
I0229 01:03:28.989067 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0229 01:03:29.023231 131854 logs.go:138] Found kubelet problem: Feb 29 01:03:21 ingress-addon-legacy-270792 kubelet[51519]: F0229 01:03:21.454607 51519 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:03:29.034129 131854 logs.go:138] Found kubelet problem: Feb 29 01:03:22 ingress-addon-legacy-270792 kubelet[51702]: F0229 01:03:22.668129 51702 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:03:29.043868 131854 logs.go:138] Found kubelet problem: Feb 29 01:03:23 ingress-addon-legacy-270792 kubelet[51880]: F0229 01:03:23.955336 51880 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:03:29.050484 131854 logs.go:138] Found kubelet problem: Feb 29 01:03:25 ingress-addon-legacy-270792 kubelet[52057]: F0229 01:03:25.205454 52057 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:03:29.057092 131854 logs.go:138] Found kubelet problem: Feb 29 01:03:26 ingress-addon-legacy-270792 kubelet[52234]: F0229 01:03:26.399816 52234 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:03:29.063787 131854 logs.go:138] Found kubelet problem: Feb 29 01:03:27 ingress-addon-legacy-270792 kubelet[52414]: F0229 01:03:27.714680 52414 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:03:29.069823 131854 logs.go:123] Gathering logs for dmesg ...
I0229 01:03:29.069840 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0229 01:03:29.083597 131854 logs.go:123] Gathering logs for describe nodes ...
I0229 01:03:29.083621 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0229 01:03:29.143162 131854 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0229 01:03:29.143185 131854 logs.go:123] Gathering logs for Docker ...
I0229 01:03:29.143203 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0229 01:03:29.185146 131854 logs.go:123] Gathering logs for container status ...
I0229 01:03:29.185176 131854 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0229 01:03:29.237855 131854 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 00:59:27.271908 18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 00:59:28.809110 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 00:59:28.810288 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 01:03:29.237906 131854 out.go:239] *
W0229 01:03:29.238098 131854 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 00:59:27.271908 18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 00:59:28.809110 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 00:59:28.810288 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 00:59:27.271908 18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 00:59:28.809110 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 00:59:28.810288 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 01:03:29.238131 131854 out.go:239] *
W0229 01:03:29.238964 131854 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
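Since this run uses a non-default profile, the log-collection command in the box above also needs the profile name; a sketch using the binary and profile from this run:

    # Collect logs for the failing profile, for attachment to a GitHub issue.
    out/minikube-linux-amd64 logs --file=logs.txt -p ingress-addon-legacy-270792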
I0229 01:03:29.241538 131854 out.go:177] X Problems detected in kubelet:
I0229 01:03:29.243125 131854 out.go:177] Feb 29 01:03:21 ingress-addon-legacy-270792 kubelet[51519]: F0229 01:03:21.454607 51519 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:03:29.244493 131854 out.go:177] Feb 29 01:03:22 ingress-addon-legacy-270792 kubelet[51702]: F0229 01:03:22.668129 51702 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:03:29.245719 131854 out.go:177] Feb 29 01:03:23 ingress-addon-legacy-270792 kubelet[51880]: F0229 01:03:23.955336 51880 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:03:29.248427 131854 out.go:177]
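The three kubelet lines above show a crash loop: successive restarts (PIDs 51519, 51702, 51880) all die in ContainerManager with "failed to get rootfs info". That is consistent with the SystemVerification warning earlier in the run, where Docker 24.0.7 is far newer than the last version validated for Kubernetes v1.18 (19.03). On the node itself, one way to confirm the loop and capture the surrounding context is the systemd journal, as the kubeadm hints suggest; a sketch:

    # Inspect the most recent kubelet restarts and the rootfs/ContainerManager error.
    sudo journalctl -u kubelet --no-pager -n 100 | grep -Ei -A2 'rootfs|ContainerManager'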
W0229 01:03:29.249789 131854 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 00:59:27.271908 18083 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 00:59:28.809110 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 00:59:28.810288 18083 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 01:03:29.249842 131854 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0229 01:03:29.249860 131854 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0229 01:03:29.251429 131854 out.go:177]
** /stderr **
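The Suggestion in the stderr above amounts to a concrete retry: start the same profile with the kubelet cgroup driver pinned to systemd. A sketch of that invocation, reusing the exact binary, profile, and flags from this run with the suggested --extra-config appended:

    # Retry the failed start with the cgroup driver the log suggests.
    out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=kvm2 \
      --extra-config=kubelet.cgroup-driver=systemd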
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-270792 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (401.36s)
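To reproduce only this failure rather than the whole suite, the standard Go -run filter over the subtest name in the FAIL line applies; the package path below is an assumption inferred from the test file name in this log, and the suite may also require driver-selection flags:

    # Rerun only this subtest (package path assumed from ingress_addon_legacy_test.go).
    go test ./test/integration -run 'TestIngressAddonLegacy/StartLegacyK8sCluster' -timeout 60m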