=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd
E0229 17:54:42.039856 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:55:09.725866 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/addons-771161/client.crt: no such file or directory
E0229 17:56:33.750734 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.755978 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.766223 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.786453 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.826723 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:33.907072 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:34.067487 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:34.388102 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:35.028989 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:36.309235 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:38.869817 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:43.990011 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:56:54.230982 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
E0229 17:57:14.711337 13721 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/functional-296731/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd: exit status 109 (4m57.071761245s)
-- stdout --
* [ingress-addon-legacy-180742] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18259
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node ingress-addon-legacy-180742 in cluster ingress-addon-legacy-180742
* Downloading Kubernetes v1.18.20 preload ...
* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Feb 29 17:57:27 ingress-addon-legacy-180742 kubelet[6000]: F0229 17:57:27.972664 6000 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 17:57:29 ingress-addon-legacy-180742 kubelet[6024]: F0229 17:57:29.256416 6024 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 17:57:30 ingress-addon-legacy-180742 kubelet[6050]: F0229 17:57:30.511750 6050 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
-- /stdout --
** stderr **
I0229 17:52:39.642457 22516 out.go:291] Setting OutFile to fd 1 ...
I0229 17:52:39.642724 22516 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:52:39.642734 22516 out.go:304] Setting ErrFile to fd 2...
I0229 17:52:39.642738 22516 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:52:39.642926 22516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6412/.minikube/bin
I0229 17:52:39.643491 22516 out.go:298] Setting JSON to false
I0229 17:52:39.644344 22516 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2101,"bootTime":1709227059,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0229 17:52:39.644413 22516 start.go:139] virtualization: kvm guest
I0229 17:52:39.647204 22516 out.go:177] * [ingress-addon-legacy-180742] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0229 17:52:39.648548 22516 out.go:177] - MINIKUBE_LOCATION=18259
I0229 17:52:39.648555 22516 notify.go:220] Checking for updates...
I0229 17:52:39.649934 22516 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0229 17:52:39.651432 22516 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18259-6412/kubeconfig
I0229 17:52:39.652779 22516 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6412/.minikube
I0229 17:52:39.653832 22516 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0229 17:52:39.654905 22516 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0229 17:52:39.656166 22516 driver.go:392] Setting default libvirt URI to qemu:///system
I0229 17:52:39.689817 22516 out.go:177] * Using the kvm2 driver based on user configuration
I0229 17:52:39.691012 22516 start.go:299] selected driver: kvm2
I0229 17:52:39.691031 22516 start.go:903] validating driver "kvm2" against <nil>
I0229 17:52:39.691042 22516 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0229 17:52:39.691692 22516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 17:52:39.691750 22516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0229 17:52:39.705501 22516 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I0229 17:52:39.705563 22516 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0229 17:52:39.705771 22516 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0229 17:52:39.705833 22516 cni.go:84] Creating CNI manager for ""
I0229 17:52:39.705846 22516 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0229 17:52:39.705853 22516 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0229 17:52:39.705863 22516 start_flags.go:323] config:
{Name:ingress-addon-legacy-180742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 17:52:39.705994 22516 iso.go:125] acquiring lock: {Name:mkfdba4e88687e074c733f44da0c4de025dfd4cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 17:52:39.707502 22516 out.go:177] * Starting control plane node ingress-addon-legacy-180742 in cluster ingress-addon-legacy-180742
I0229 17:52:39.708648 22516 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
I0229 17:52:39.861960 22516 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
I0229 17:52:39.861992 22516 cache.go:56] Caching tarball of preloaded images
I0229 17:52:39.862129 22516 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
I0229 17:52:39.863836 22516 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0229 17:52:39.865010 22516 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
I0229 17:52:40.022357 22516 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4?checksum=md5:b585eebe982180189fed21f0bd283cca -> /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
I0229 17:53:03.002918 22516 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
I0229 17:53:03.003014 22516 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
I0229 17:53:04.051010 22516 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on containerd
I0229 17:53:04.051318 22516 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/config.json ...
I0229 17:53:04.051346 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/config.json: {Name:mk35eb9355d8099644c0664e1cfbbd20444a3b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:04.051520 22516 start.go:365] acquiring machines lock for ingress-addon-legacy-180742: {Name:mkf692a70c79b07a451e99e83525eaaa17684fbb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0229 17:53:04.051562 22516 start.go:369] acquired machines lock for "ingress-addon-legacy-180742" in 18.476µs
I0229 17:53:04.051579 22516 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-180742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0229 17:53:04.051661 22516 start.go:125] createHost starting for "" (driver="kvm2")
I0229 17:53:04.054929 22516 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I0229 17:53:04.055077 22516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 17:53:04.055103 22516 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:53:04.069062 22516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
I0229 17:53:04.069506 22516 main.go:141] libmachine: () Calling .GetVersion
I0229 17:53:04.070001 22516 main.go:141] libmachine: Using API Version 1
I0229 17:53:04.070021 22516 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:53:04.070385 22516 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:53:04.070581 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
I0229 17:53:04.070728 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:04.070850 22516 start.go:159] libmachine.API.Create for "ingress-addon-legacy-180742" (driver="kvm2")
I0229 17:53:04.070882 22516 client.go:168] LocalClient.Create starting
I0229 17:53:04.070914 22516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem
I0229 17:53:04.070950 22516 main.go:141] libmachine: Decoding PEM data...
I0229 17:53:04.070971 22516 main.go:141] libmachine: Parsing certificate...
I0229 17:53:04.071025 22516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem
I0229 17:53:04.071044 22516 main.go:141] libmachine: Decoding PEM data...
I0229 17:53:04.071055 22516 main.go:141] libmachine: Parsing certificate...
I0229 17:53:04.071072 22516 main.go:141] libmachine: Running pre-create checks...
I0229 17:53:04.071081 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .PreCreateCheck
I0229 17:53:04.071367 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetConfigRaw
I0229 17:53:04.071717 22516 main.go:141] libmachine: Creating machine...
I0229 17:53:04.071739 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .Create
I0229 17:53:04.071838 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Creating KVM machine...
I0229 17:53:04.073025 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found existing default KVM network
I0229 17:53:04.073659 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.073531 22601 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
I0229 17:53:04.078465 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | trying to create private KVM network mk-ingress-addon-legacy-180742 192.168.39.0/24...
I0229 17:53:04.140222 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting up store path in /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742 ...
I0229 17:53:04.140266 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | private KVM network mk-ingress-addon-legacy-180742 192.168.39.0/24 created
I0229 17:53:04.140279 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Building disk image from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
I0229 17:53:04.140294 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.140162 22601 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6412/.minikube
I0229 17:53:04.140311 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Downloading /home/jenkins/minikube-integration/18259-6412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6412/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
I0229 17:53:04.352605 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.352498 22601 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa...
I0229 17:53:04.601899 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.601787 22601 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/ingress-addon-legacy-180742.rawdisk...
I0229 17:53:04.601939 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Writing magic tar header
I0229 17:53:04.601958 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Writing SSH key tar header
I0229 17:53:04.601972 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:04.601899 22601 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742 ...
I0229 17:53:04.601993 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742
I0229 17:53:04.602030 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742 (perms=drwx------)
I0229 17:53:04.602053 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube/machines (perms=drwxr-xr-x)
I0229 17:53:04.602061 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube/machines
I0229 17:53:04.602069 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412/.minikube (perms=drwxr-xr-x)
I0229 17:53:04.602076 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412/.minikube
I0229 17:53:04.602085 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6412
I0229 17:53:04.602091 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0229 17:53:04.602100 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home/jenkins
I0229 17:53:04.602106 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Checking permissions on dir: /home
I0229 17:53:04.602114 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Skipping /home - not owner
I0229 17:53:04.602125 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration/18259-6412 (perms=drwxrwxr-x)
I0229 17:53:04.602131 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0229 17:53:04.602155 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0229 17:53:04.602184 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Creating domain...
I0229 17:53:04.603237 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) define libvirt domain using xml:
I0229 17:53:04.603255 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <domain type='kvm'>
I0229 17:53:04.603262 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <name>ingress-addon-legacy-180742</name>
I0229 17:53:04.603268 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <memory unit='MiB'>4096</memory>
I0229 17:53:04.603325 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <vcpu>2</vcpu>
I0229 17:53:04.603348 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <features>
I0229 17:53:04.603359 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <acpi/>
I0229 17:53:04.603364 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <apic/>
I0229 17:53:04.603369 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <pae/>
I0229 17:53:04.603374 22516 main.go:141] libmachine: (ingress-addon-legacy-180742)
I0229 17:53:04.603380 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </features>
I0229 17:53:04.603386 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <cpu mode='host-passthrough'>
I0229 17:53:04.603396 22516 main.go:141] libmachine: (ingress-addon-legacy-180742)
I0229 17:53:04.603404 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </cpu>
I0229 17:53:04.603417 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <os>
I0229 17:53:04.603425 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <type>hvm</type>
I0229 17:53:04.603451 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <boot dev='cdrom'/>
I0229 17:53:04.603466 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <boot dev='hd'/>
I0229 17:53:04.603474 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <bootmenu enable='no'/>
I0229 17:53:04.603483 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </os>
I0229 17:53:04.603490 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <devices>
I0229 17:53:04.603498 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <disk type='file' device='cdrom'>
I0229 17:53:04.603521 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/boot2docker.iso'/>
I0229 17:53:04.603535 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <target dev='hdc' bus='scsi'/>
I0229 17:53:04.603545 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <readonly/>
I0229 17:53:04.603557 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </disk>
I0229 17:53:04.603569 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <disk type='file' device='disk'>
I0229 17:53:04.603594 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <driver name='qemu' type='raw' cache='default' io='threads' />
I0229 17:53:04.603615 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <source file='/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/ingress-addon-legacy-180742.rawdisk'/>
I0229 17:53:04.603626 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <target dev='hda' bus='virtio'/>
I0229 17:53:04.603635 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </disk>
I0229 17:53:04.603641 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <interface type='network'>
I0229 17:53:04.603649 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <source network='mk-ingress-addon-legacy-180742'/>
I0229 17:53:04.603655 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <model type='virtio'/>
I0229 17:53:04.603665 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </interface>
I0229 17:53:04.603672 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <interface type='network'>
I0229 17:53:04.603679 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <source network='default'/>
I0229 17:53:04.603686 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <model type='virtio'/>
I0229 17:53:04.603694 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </interface>
I0229 17:53:04.603700 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <serial type='pty'>
I0229 17:53:04.603711 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <target port='0'/>
I0229 17:53:04.603719 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </serial>
I0229 17:53:04.603724 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <console type='pty'>
I0229 17:53:04.603732 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <target type='serial' port='0'/>
I0229 17:53:04.603736 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </console>
I0229 17:53:04.603742 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <rng model='virtio'>
I0229 17:53:04.603747 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) <backend model='random'>/dev/random</backend>
I0229 17:53:04.603755 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </rng>
I0229 17:53:04.603760 22516 main.go:141] libmachine: (ingress-addon-legacy-180742)
I0229 17:53:04.603767 22516 main.go:141] libmachine: (ingress-addon-legacy-180742)
I0229 17:53:04.603772 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </devices>
I0229 17:53:04.603779 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) </domain>
I0229 17:53:04.603785 22516 main.go:141] libmachine: (ingress-addon-legacy-180742)
I0229 17:53:04.607811 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e3:ac:0d in network default
I0229 17:53:04.608337 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Ensuring networks are active...
I0229 17:53:04.608360 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:04.608966 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Ensuring network default is active
I0229 17:53:04.609324 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Ensuring network mk-ingress-addon-legacy-180742 is active
I0229 17:53:04.609818 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Getting domain xml...
I0229 17:53:04.610460 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Creating domain...
I0229 17:53:05.776964 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Waiting to get IP...
I0229 17:53:05.777634 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:05.777992 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:05.778032 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:05.777979 22601 retry.go:31] will retry after 264.939748ms: waiting for machine to come up
I0229 17:53:06.044475 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:06.044873 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:06.044902 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:06.044812 22601 retry.go:31] will retry after 265.069297ms: waiting for machine to come up
I0229 17:53:06.310979 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:06.311344 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:06.311368 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:06.311305 22601 retry.go:31] will retry after 467.556262ms: waiting for machine to come up
I0229 17:53:06.780770 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:06.781267 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:06.781291 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:06.781216 22601 retry.go:31] will retry after 421.595715ms: waiting for machine to come up
I0229 17:53:07.204746 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:07.205135 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:07.205160 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:07.205096 22601 retry.go:31] will retry after 532.72974ms: waiting for machine to come up
I0229 17:53:07.739784 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:07.740232 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:07.740256 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:07.740200 22601 retry.go:31] will retry after 618.789244ms: waiting for machine to come up
I0229 17:53:08.360889 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:08.361282 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:08.361307 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:08.361241 22601 retry.go:31] will retry after 789.088812ms: waiting for machine to come up
I0229 17:53:09.151658 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:09.152106 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:09.152122 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:09.152073 22601 retry.go:31] will retry after 1.087236245s: waiting for machine to come up
I0229 17:53:10.241383 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:10.241721 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:10.241763 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:10.241710 22601 retry.go:31] will retry after 1.640986162s: waiting for machine to come up
I0229 17:53:11.884465 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:11.884804 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:11.884830 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:11.884763 22601 retry.go:31] will retry after 1.591325231s: waiting for machine to come up
I0229 17:53:13.477258 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:13.477643 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:13.477678 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:13.477607 22601 retry.go:31] will retry after 2.578096176s: waiting for machine to come up
I0229 17:53:16.058742 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:16.059164 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:16.059192 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:16.059116 22601 retry.go:31] will retry after 2.779197081s: waiting for machine to come up
I0229 17:53:18.841959 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:18.842485 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:18.842515 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:18.842448 22601 retry.go:31] will retry after 3.651517306s: waiting for machine to come up
I0229 17:53:22.498334 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:22.498758 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find current IP address of domain ingress-addon-legacy-180742 in network mk-ingress-addon-legacy-180742
I0229 17:53:22.498780 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | I0229 17:53:22.498724 22601 retry.go:31] will retry after 3.9256536s: waiting for machine to come up
I0229 17:53:26.426923 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:26.427485 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Found IP for machine: 192.168.39.153
I0229 17:53:26.427510 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has current primary IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:26.427526 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Reserving static IP address...
I0229 17:53:26.427831 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-180742", mac: "52:54:00:e7:12:1e", ip: "192.168.39.153"} in network mk-ingress-addon-legacy-180742
I0229 17:53:26.495527 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Getting to WaitForSSH function...
I0229 17:53:26.495569 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Reserved static IP address: 192.168.39.153
I0229 17:53:26.495627 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Waiting for SSH to be available...
I0229 17:53:26.498107 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:26.498448 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742
I0229 17:53:26.498472 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-180742 interface with MAC address 52:54:00:e7:12:1e
I0229 17:53:26.498661 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH client type: external
I0229 17:53:26.498689 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa (-rw-------)
I0229 17:53:26.498723 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa -p 22] /usr/bin/ssh <nil>}
I0229 17:53:26.498740 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | About to run SSH command:
I0229 17:53:26.498771 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | exit 0
I0229 17:53:26.502205 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | SSH cmd err, output: exit status 255:
I0229 17:53:26.502228 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0229 17:53:26.502249 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | command : exit 0
I0229 17:53:26.502267 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | err : exit status 255
I0229 17:53:26.502279 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | output :
I0229 17:53:29.502628 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Getting to WaitForSSH function...
I0229 17:53:29.505700 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.506115 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:29.506149 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.506307 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH client type: external
I0229 17:53:29.506339 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa (-rw-------)
I0229 17:53:29.506370 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa -p 22] /usr/bin/ssh <nil>}
I0229 17:53:29.506392 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | About to run SSH command:
I0229 17:53:29.506419 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | exit 0
I0229 17:53:29.630898 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | SSH cmd err, output: <nil>:
I0229 17:53:29.631119 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) KVM machine creation complete!
I0229 17:53:29.631458 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetConfigRaw
I0229 17:53:29.632013 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:29.632178 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:29.632346 22516 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0229 17:53:29.632360 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetState
I0229 17:53:29.633464 22516 main.go:141] libmachine: Detecting operating system of created instance...
I0229 17:53:29.633477 22516 main.go:141] libmachine: Waiting for SSH to be available...
I0229 17:53:29.633482 22516 main.go:141] libmachine: Getting to WaitForSSH function...
I0229 17:53:29.633488 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:29.635606 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.635914 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:29.635940 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.636061 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:29.636220 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.636368 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.636515 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:29.636704 22516 main.go:141] libmachine: Using SSH client type: native
I0229 17:53:29.636931 22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.153 22 <nil> <nil>}
I0229 17:53:29.636946 22516 main.go:141] libmachine: About to run SSH command:
exit 0
I0229 17:53:29.746075 22516 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 17:53:29.746097 22516 main.go:141] libmachine: Detecting the provisioner...
I0229 17:53:29.746108 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:29.748636 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.748959 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:29.748990 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.749123 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:29.749306 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.749442 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.749565 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:29.749688 22516 main.go:141] libmachine: Using SSH client type: native
I0229 17:53:29.749850 22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.153 22 <nil> <nil>}
I0229 17:53:29.749864 22516 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0229 17:53:29.859893 22516 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0229 17:53:29.859960 22516 main.go:141] libmachine: found compatible host: buildroot
I0229 17:53:29.859971 22516 main.go:141] libmachine: Provisioning with buildroot...
I0229 17:53:29.859980 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
I0229 17:53:29.860270 22516 buildroot.go:166] provisioning hostname "ingress-addon-legacy-180742"
I0229 17:53:29.860300 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
I0229 17:53:29.860507 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:29.862886 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.863200 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:29.863234 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.863343 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:29.863523 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.863634 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.863763 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:29.863928 22516 main.go:141] libmachine: Using SSH client type: native
I0229 17:53:29.864136 22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.153 22 <nil> <nil>}
I0229 17:53:29.864154 22516 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-180742 && echo "ingress-addon-legacy-180742" | sudo tee /etc/hostname
I0229 17:53:29.985841 22516 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-180742
I0229 17:53:29.985870 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:29.988295 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.988619 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:29.988654 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:29.988788 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:29.988984 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.989143 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:29.989262 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:29.989433 22516 main.go:141] libmachine: Using SSH client type: native
I0229 17:53:29.989603 22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.153 22 <nil> <nil>}
I0229 17:53:29.989629 22516 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-180742' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-180742/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-180742' | sudo tee -a /etc/hosts;
fi
fi
I0229 17:53:30.104093 22516 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 17:53:30.104117 22516 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6412/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6412/.minikube}
I0229 17:53:30.104137 22516 buildroot.go:174] setting up certificates
I0229 17:53:30.104146 22516 provision.go:83] configureAuth start
I0229 17:53:30.104154 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetMachineName
I0229 17:53:30.104397 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
I0229 17:53:30.106621 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.106955 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.106989 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.107088 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:30.109165 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.109456 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.109482 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.109615 22516 provision.go:138] copyHostCerts
I0229 17:53:30.109647 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
I0229 17:53:30.109675 22516 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem, removing ...
I0229 17:53:30.109695 22516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem
I0229 17:53:30.109756 22516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/ca.pem (1078 bytes)
I0229 17:53:30.109827 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
I0229 17:53:30.109844 22516 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem, removing ...
I0229 17:53:30.109851 22516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem
I0229 17:53:30.109873 22516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/cert.pem (1123 bytes)
I0229 17:53:30.109916 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
I0229 17:53:30.109933 22516 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem, removing ...
I0229 17:53:30.109939 22516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem
I0229 17:53:30.109959 22516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6412/.minikube/key.pem (1675 bytes)
I0229 17:53:30.110002 22516 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-180742 san=[192.168.39.153 192.168.39.153 localhost 127.0.0.1 minikube ingress-addon-legacy-180742]
I0229 17:53:30.474647 22516 provision.go:172] copyRemoteCerts
I0229 17:53:30.474701 22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0229 17:53:30.474724 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:30.476954 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.477285 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.477313 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.477523 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:30.477704 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:30.477892 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:30.478025 22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
I0229 17:53:30.562275 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0229 17:53:30.562337 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0229 17:53:30.588008 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem -> /etc/docker/server.pem
I0229 17:53:30.588069 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0229 17:53:30.613000 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0229 17:53:30.613052 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0229 17:53:30.638094 22516 provision.go:86] duration metric: configureAuth took 533.938114ms
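(For reference, the "generating server cert ... san=[...]" and configureAuth steps above amount to signing a host-specific server certificate with the local minikube CA. The sketch below is purely illustrative Go, not minikube's provision.go; the file names, organization string, and SAN list are assumptions for the example, and the CA key is assumed to be a PKCS#1 RSA PEM.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load an existing CA cert and key (paths are example assumptions).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)       // CA certificate (PEM)
	caKeyBlock, _ := pem.Decode(caKeyPEM) // CA private key (PKCS#1 PEM assumed)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
	must(err)

	// Generate the server key and a certificate template with the SANs
	// the endpoint may be reached on (values taken from the log above).
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"example-org"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.153"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube"},
	}
	// Sign with the CA and write the PEM-encoded cert and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}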
I0229 17:53:30.638116 22516 buildroot.go:189] setting minikube options for container-runtime
I0229 17:53:30.638290 22516 config.go:182] Loaded profile config "ingress-addon-legacy-180742": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
I0229 17:53:30.638311 22516 main.go:141] libmachine: Checking connection to Docker...
I0229 17:53:30.638322 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetURL
I0229 17:53:30.639418 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | Using libvirt version 6000000
I0229 17:53:30.641623 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.641917 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.641939 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.642123 22516 main.go:141] libmachine: Docker is up and running!
I0229 17:53:30.642137 22516 main.go:141] libmachine: Reticulating splines...
I0229 17:53:30.642144 22516 client.go:171] LocalClient.Create took 26.571253433s
I0229 17:53:30.642171 22516 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-180742" took 26.571317685s
I0229 17:53:30.642183 22516 start.go:300] post-start starting for "ingress-addon-legacy-180742" (driver="kvm2")
I0229 17:53:30.642201 22516 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0229 17:53:30.642229 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:30.642459 22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0229 17:53:30.642480 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:30.644553 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.644911 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.644942 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.645073 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:30.645224 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:30.645382 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:30.645486 22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
I0229 17:53:30.730727 22516 ssh_runner.go:195] Run: cat /etc/os-release
I0229 17:53:30.735592 22516 info.go:137] Remote host: Buildroot 2023.02.9
I0229 17:53:30.735611 22516 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/addons for local assets ...
I0229 17:53:30.735664 22516 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6412/.minikube/files for local assets ...
I0229 17:53:30.735742 22516 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> 137212.pem in /etc/ssl/certs
I0229 17:53:30.735753 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> /etc/ssl/certs/137212.pem
I0229 17:53:30.735841 22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0229 17:53:30.747992 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /etc/ssl/certs/137212.pem (1708 bytes)
I0229 17:53:30.774002 22516 start.go:303] post-start completed in 131.804098ms
I0229 17:53:30.774041 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetConfigRaw
I0229 17:53:30.774577 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
I0229 17:53:30.777022 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.777351 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.777381 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.777573 22516 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/config.json ...
I0229 17:53:30.777734 22516 start.go:128] duration metric: createHost completed in 26.726064734s
I0229 17:53:30.777753 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:30.779636 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.779904 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.779929 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.780066 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:30.780211 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:30.780365 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:30.780495 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:30.780656 22516 main.go:141] libmachine: Using SSH client type: native
I0229 17:53:30.780816 22516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.153 22 <nil> <nil>}
I0229 17:53:30.780826 22516 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0229 17:53:30.887609 22516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709229210.861668051
I0229 17:53:30.887628 22516 fix.go:206] guest clock: 1709229210.861668051
I0229 17:53:30.887634 22516 fix.go:219] Guest: 2024-02-29 17:53:30.861668051 +0000 UTC Remote: 2024-02-29 17:53:30.777744277 +0000 UTC m=+51.186873393 (delta=83.923774ms)
I0229 17:53:30.887652 22516 fix.go:190] guest clock delta is within tolerance: 83.923774ms
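(The guest-clock check above runs `date +%s.%N` over SSH and compares it against the host clock. A minimal stand-alone sketch of that comparison, with a tolerance value assumed for the example, could look like this.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// it is from the local clock. Some sub-nanosecond precision is lost in the
// float conversion, which is acceptable for a skew check like this.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("1709229210.861668051")
	if err != nil {
		panic(err)
	}
	if math.Abs(d.Seconds()) < 1.0 { // assumed 1s tolerance for the sketch
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}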
I0229 17:53:30.887665 22516 start.go:83] releasing machines lock for "ingress-addon-legacy-180742", held for 26.836086747s
I0229 17:53:30.887683 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:30.887920 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
I0229 17:53:30.890493 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.890815 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.890830 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.890963 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:30.891420 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:30.891591 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .DriverName
I0229 17:53:30.891655 22516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0229 17:53:30.891698 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:30.891831 22516 ssh_runner.go:195] Run: cat /version.json
I0229 17:53:30.891856 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHHostname
I0229 17:53:30.894202 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.894273 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.894582 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.894607 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.894637 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:30.894664 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:30.894739 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:30.894889 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHPort
I0229 17:53:30.894915 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:30.895053 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHKeyPath
I0229 17:53:30.895085 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:30.895153 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetSSHUsername
I0229 17:53:30.895217 22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
I0229 17:53:30.895299 22516 sshutil.go:53] new ssh client: &{IP:192.168.39.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6412/.minikube/machines/ingress-addon-legacy-180742/id_rsa Username:docker}
I0229 17:53:30.999049 22516 ssh_runner.go:195] Run: systemctl --version
I0229 17:53:31.005852 22516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0229 17:53:31.012144 22516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0229 17:53:31.012219 22516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0229 17:53:31.029685 22516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
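(The find/mv invocation above moves any bridge or podman CNI configs out of the way so they cannot conflict with the CNI config that gets installed later. A simplified, stand-alone equivalent of that rename step, with the directory and suffix taken from the log and minimal error handling, might be:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename bridge/podman CNI configs to *.mk_disabled, skipping files
	// that were already disabled on a previous run.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}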
I0229 17:53:31.029702 22516 start.go:475] detecting cgroup driver to use...
I0229 17:53:31.029777 22516 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0229 17:53:31.067259 22516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 17:53:31.082082 22516 docker.go:217] disabling cri-docker service (if available) ...
I0229 17:53:31.082153 22516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0229 17:53:31.096972 22516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0229 17:53:31.112291 22516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0229 17:53:31.230375 22516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0229 17:53:31.372170 22516 docker.go:233] disabling docker service ...
I0229 17:53:31.372230 22516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0229 17:53:31.388297 22516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0229 17:53:31.401433 22516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0229 17:53:31.535521 22516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0229 17:53:31.646072 22516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0229 17:53:31.661978 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0229 17:53:31.682230 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0229 17:53:31.693153 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0229 17:53:31.703827 22516 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0229 17:53:31.703876 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0229 17:53:31.714461 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 17:53:31.725115 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0229 17:53:31.735600 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 17:53:31.746476 22516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0229 17:53:31.757382 22516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0229 17:53:31.768023 22516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0229 17:53:31.777601 22516 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0229 17:53:31.777641 22516 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0229 17:53:31.791791 22516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
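(When the bridge-netfilter sysctl is not available, the log falls back to loading br_netfilter and enabling IPv4 forwarding directly through /proc. A minimal sketch of those two steps, which needs root to run, is below; the module-load failure is treated as non-fatal, matching the tolerant behaviour seen above.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Best-effort equivalent of `sudo modprobe br_netfilter`.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe br_netfilter failed (continuing): %v: %s\n", err, out)
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		panic(err)
	}
}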
I0229 17:53:31.802929 22516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 17:53:31.914961 22516 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0229 17:53:31.945264 22516 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
I0229 17:53:31.945353 22516 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0229 17:53:31.950161 22516 retry.go:31] will retry after 688.012804ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
I0229 17:53:32.639170 22516 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0229 17:53:32.645083 22516 start.go:543] Will wait 60s for crictl version
I0229 17:53:32.645147 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:32.649303 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0229 17:53:32.684369 22516 start.go:559] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.11
RuntimeApiVersion: v1
I0229 17:53:32.684435 22516 ssh_runner.go:195] Run: containerd --version
I0229 17:53:32.712421 22516 ssh_runner.go:195] Run: containerd --version
I0229 17:53:32.741751 22516 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
I0229 17:53:32.743063 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) Calling .GetIP
I0229 17:53:32.745366 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:32.745706 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:12:1e", ip: ""} in network mk-ingress-addon-legacy-180742: {Iface:virbr1 ExpiryTime:2024-02-29 18:53:19 +0000 UTC Type:0 Mac:52:54:00:e7:12:1e Iaid: IPaddr:192.168.39.153 Prefix:24 Hostname:ingress-addon-legacy-180742 Clientid:01:52:54:00:e7:12:1e}
I0229 17:53:32.745735 22516 main.go:141] libmachine: (ingress-addon-legacy-180742) DBG | domain ingress-addon-legacy-180742 has defined IP address 192.168.39.153 and MAC address 52:54:00:e7:12:1e in network mk-ingress-addon-legacy-180742
I0229 17:53:32.745896 22516 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0229 17:53:32.750397 22516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0229 17:53:32.763828 22516 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
I0229 17:53:32.763886 22516 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:32.800132 22516 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
I0229 17:53:32.800210 22516 ssh_runner.go:195] Run: which lz4
I0229 17:53:32.805142 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0229 17:53:32.805247 22516 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0229 17:53:32.809668 22516 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0229 17:53:32.809699 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (494845061 bytes)
I0229 17:53:34.632740 22516 containerd.go:548] Took 1.827522 seconds to copy over tarball
I0229 17:53:34.632818 22516 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0229 17:53:37.357730 22516 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.724878051s)
I0229 17:53:37.357766 22516 containerd.go:555] Took 2.725004 seconds to extract the tarball
I0229 17:53:37.357781 22516 ssh_runner.go:146] rm: /preloaded.tar.lz4
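(The "Took ... seconds" and "Completed: ... (2.72s)" lines above come from timing each remote command. A self-contained sketch of that timing pattern is shown below; it runs a harmless local command as a stand-in for the remote tar extraction.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run executes a command and reports how long it took, similar in spirit
// to the duration lines emitted by ssh_runner above.
func run(name string, args ...string) error {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("Completed: %s %v: (%s)\n%s", name, args, time.Since(start), out)
	return err
}

func main() {
	if err := run("tar", "--help"); err != nil {
		panic(err)
	}
}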
I0229 17:53:37.404312 22516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 17:53:37.521269 22516 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0229 17:53:37.553529 22516 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:37.591056 22516 retry.go:31] will retry after 221.455187ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2024-02-29T17:53:37Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0229 17:53:37.813513 22516 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 17:53:37.853650 22516 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
I0229 17:53:37.853686 22516 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0229 17:53:37.853732 22516 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 17:53:37.853771 22516 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0229 17:53:37.853782 22516 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:53:37.853831 22516 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:53:37.853845 22516 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0229 17:53:37.853903 22516 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:53:37.853835 22516 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0229 17:53:37.854016 22516 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0229 17:53:37.855170 22516 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:53:37.855177 22516 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:53:37.855184 22516 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0229 17:53:37.855198 22516 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 17:53:37.855237 22516 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0229 17:53:37.855320 22516 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0229 17:53:37.855403 22516 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0229 17:53:37.855438 22516 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:53:38.058782 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
I0229 17:53:38.058846 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.084033 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346"
I0229 17:53:38.084097 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.194683 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1"
I0229 17:53:38.194758 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.204893 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba"
I0229 17:53:38.204968 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.214621 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290"
I0229 17:53:38.214688 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.237990 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5"
I0229 17:53:38.238073 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.243755 22516 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
I0229 17:53:38.243821 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:38.357070 22516 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0229 17:53:38.357106 22516 cri.go:218] Removing image: registry.k8s.io/pause:3.2
I0229 17:53:38.357157 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:38.590238 22516 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0229 17:53:38.590285 22516 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:53:38.590332 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:39.214141 22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.019365306s)
I0229 17:53:39.214193 22516 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0229 17:53:39.214220 22516 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:53:39.214267 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:39.214752 22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.009765173s)
I0229 17:53:39.214810 22516 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0229 17:53:39.214837 22516 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0229 17:53:39.214877 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:39.246335 22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.0316193s)
I0229 17:53:39.246400 22516 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0229 17:53:39.246445 22516 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:53:39.246494 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:39.246824 22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.008721893s)
I0229 17:53:39.246865 22516 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0229 17:53:39.246885 22516 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
I0229 17:53:39.246922 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:39.247262 22516 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.003422998s)
I0229 17:53:39.247311 22516 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0229 17:53:39.247328 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
I0229 17:53:39.247342 22516 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
I0229 17:53:39.247366 22516 ssh_runner.go:195] Run: which crictl
I0229 17:53:39.247367 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:53:39.247414 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:53:39.247429 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
I0229 17:53:39.251235 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:53:39.260307 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
I0229 17:53:39.389506 22516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
I0229 17:53:39.389521 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0229 17:53:39.389567 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0229 17:53:39.389623 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0229 17:53:39.389643 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0229 17:53:39.389684 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0229 17:53:39.389742 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0229 17:53:39.424702 22516 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0229 17:53:39.789810 22516 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
I0229 17:53:39.789864 22516 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 17:53:39.984139 22516 cache_images.go:92] LoadImages completed in 2.130435683s
W0229 17:53:39.984226 22516 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
I0229 17:53:39.984276 22516 ssh_runner.go:195] Run: sudo crictl info
I0229 17:53:40.021613 22516 cni.go:84] Creating CNI manager for ""
I0229 17:53:40.021637 22516 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0229 17:53:40.021651 22516 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0229 17:53:40.021688 22516 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.153 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-180742 NodeName:ingress-addon-legacy-180742 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0229 17:53:40.021849 22516 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.153
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "ingress-addon-legacy-180742"
kubeletExtraArgs:
node-ip: 192.168.39.153
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.153"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
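(The InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents above are rendered from the kubeadm options logged earlier. As a rough illustration of rendering such a manifest from a Go template, the snippet below uses invented field names and a heavily trimmed template; it is not minikube's actual template or structs.)

package main

import (
	"os"
	"text/template"
)

// opts holds only the values this example template needs; the field names
// are placeholders for the sketch, not minikube's real configuration types.
type opts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.39.153",
		BindPort:          8443,
		NodeName:          "ingress-addon-legacy-180742",
		KubernetesVersion: "v1.18.20",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}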
I0229 17:53:40.021935 22516 kubeadm.go:976] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-180742 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.153
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0229 17:53:40.021987 22516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0229 17:53:40.032773 22516 binaries.go:44] Found k8s binaries, skipping transfer
I0229 17:53:40.032841 22516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0229 17:53:40.043086 22516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0229 17:53:40.061645 22516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0229 17:53:40.079895 22516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2137 bytes)
I0229 17:53:40.097693 22516 ssh_runner.go:195] Run: grep 192.168.39.153 control-plane.minikube.internal$ /etc/hosts
I0229 17:53:40.101928 22516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.153 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0229 17:53:40.115363 22516 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742 for IP: 192.168.39.153
I0229 17:53:40.115390 22516 certs.go:190] acquiring lock for shared ca certs: {Name:mk2dadf741a26f46fb193aefceea30d228c16c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.115541 22516 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key
I0229 17:53:40.115593 22516 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key
I0229 17:53:40.115649 22516 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.key
I0229 17:53:40.115676 22516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.crt with IP's: []
I0229 17:53:40.283545 22516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.crt ...
I0229 17:53:40.283577 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.crt: {Name:mk35a83d8d385ec160686cf1ec74716b8a23de49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.283767 22516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.key ...
I0229 17:53:40.283783 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/client.key: {Name:mk8674d5d9bb0261a5ad50a34db3ee19436bf1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.283889 22516 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd
I0229 17:53:40.283908 22516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd with IP's: [192.168.39.153 10.96.0.1 127.0.0.1 10.0.0.1]
I0229 17:53:40.785142 22516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd ...
I0229 17:53:40.785174 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd: {Name:mka38470ed0efd8cfe51c8a14236dbbac9952717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.785351 22516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd ...
I0229 17:53:40.785368 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd: {Name:mka1e7b8fd9707f1fa16d6add705e2b0c401d463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.785467 22516 certs.go:337] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt.9df834dd -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt
I0229 17:53:40.785572 22516 certs.go:341] copying /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key.9df834dd -> /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key
I0229 17:53:40.785659 22516 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key
I0229 17:53:40.785679 22516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt with IP's: []
I0229 17:53:40.967870 22516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt ...
I0229 17:53:40.967902 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt: {Name:mk6ab76f4fa1fe99f982bfe1389c2c74b27d9f3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.968073 22516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key ...
I0229 17:53:40.968093 22516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key: {Name:mk34aed281d82d8c6879fefc48888497c0319847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:53:40.968248 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0229 17:53:40.968272 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0229 17:53:40.968287 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0229 17:53:40.968301 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0229 17:53:40.968321 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0229 17:53:40.968337 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0229 17:53:40.968352 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0229 17:53:40.968371 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0229 17:53:40.968441 22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem (1338 bytes)
W0229 17:53:40.968492 22516 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721_empty.pem, impossibly tiny 0 bytes
I0229 17:53:40.968507 22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca-key.pem (1675 bytes)
I0229 17:53:40.968554 22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/ca.pem (1078 bytes)
I0229 17:53:40.968587 22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/cert.pem (1123 bytes)
I0229 17:53:40.968626 22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/home/jenkins/minikube-integration/18259-6412/.minikube/certs/key.pem (1675 bytes)
I0229 17:53:40.968679 22516 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem (1708 bytes)
I0229 17:53:40.968728 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem -> /usr/share/ca-certificates/137212.pem
I0229 17:53:40.968750 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0229 17:53:40.968768 22516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem -> /usr/share/ca-certificates/13721.pem
I0229 17:53:40.969372 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0229 17:53:40.996861 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0229 17:53:41.022226 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0229 17:53:41.048596 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/profiles/ingress-addon-legacy-180742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0229 17:53:41.074303 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0229 17:53:41.099724 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0229 17:53:41.125524 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0229 17:53:41.151404 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0229 17:53:41.177029 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/files/etc/ssl/certs/137212.pem --> /usr/share/ca-certificates/137212.pem (1708 bytes)
I0229 17:53:41.202706 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0229 17:53:41.228110 22516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6412/.minikube/certs/13721.pem --> /usr/share/ca-certificates/13721.pem (1338 bytes)
I0229 17:53:41.253439 22516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0229 17:53:41.271136 22516 ssh_runner.go:195] Run: openssl version
I0229 17:53:41.277332 22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13721.pem && ln -fs /usr/share/ca-certificates/13721.pem /etc/ssl/certs/13721.pem"
I0229 17:53:41.288560 22516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13721.pem
I0229 17:53:41.293511 22516 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:48 /usr/share/ca-certificates/13721.pem
I0229 17:53:41.293560 22516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13721.pem
I0229 17:53:41.299702 22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13721.pem /etc/ssl/certs/51391683.0"
I0229 17:53:41.310896 22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/137212.pem && ln -fs /usr/share/ca-certificates/137212.pem /etc/ssl/certs/137212.pem"
I0229 17:53:41.322034 22516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/137212.pem
I0229 17:53:41.330042 22516 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:48 /usr/share/ca-certificates/137212.pem
I0229 17:53:41.330079 22516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/137212.pem
I0229 17:53:41.336052 22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/137212.pem /etc/ssl/certs/3ec20f2e.0"
I0229 17:53:41.347018 22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0229 17:53:41.357995 22516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0229 17:53:41.362852 22516 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
I0229 17:53:41.362885 22516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0229 17:53:41.368676 22516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
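(Each CA install above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject-name hash, and symlink <hash>.0 in /etc/ssl/certs so OpenSSL can find it. An equivalent stand-alone sketch of the hash-and-link step, with the certificate path assumed from the log, is:)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // assumed path from the log
	// `openssl x509 -hash -noout -in <cert>` prints the subject-name hash
	// used as the symlink name in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs <cert> <link>`: drop any stale link first.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}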
I0229 17:53:41.379612 22516 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0229 17:53:41.384529 22516 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0229 17:53:41.384585 22516 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-180742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-180742 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.153 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 17:53:41.384671 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0229 17:53:41.384742 22516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0229 17:53:41.423715 22516 cri.go:89] found id: ""
I0229 17:53:41.423804 22516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0229 17:53:41.434425 22516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0229 17:53:41.445320 22516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 17:53:41.455176 22516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0229 17:53:41.455212 22516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 17:53:41.516228 22516 kubeadm.go:322] W0229 17:53:41.501196 836 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 17:53:41.648416 22516 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 17:53:44.397997 22516 kubeadm.go:322] W0229 17:53:44.384900 836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:53:44.399237 22516 kubeadm.go:322] W0229 17:53:44.386136 836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:55:39.399521 22516 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 17:55:39.399622 22516 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0229 17:55:39.400981 22516 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 17:55:39.401076 22516 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 17:55:39.401151 22516 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 17:55:39.401243 22516 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 17:55:39.401361 22516 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 17:55:39.401485 22516 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 17:55:39.401582 22516 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 17:55:39.401626 22516 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 17:55:39.401688 22516 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 17:55:39.403371 22516 out.go:204] - Generating certificates and keys ...
I0229 17:55:39.403444 22516 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 17:55:39.403506 22516 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 17:55:39.403573 22516 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0229 17:55:39.403658 22516 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0229 17:55:39.403745 22516 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0229 17:55:39.403835 22516 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0229 17:55:39.403915 22516 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0229 17:55:39.404061 22516 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
I0229 17:55:39.404110 22516 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0229 17:55:39.404222 22516 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
I0229 17:55:39.404296 22516 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0229 17:55:39.404379 22516 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0229 17:55:39.404420 22516 kubeadm.go:322] [certs] Generating "sa" key and public key
I0229 17:55:39.404468 22516 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 17:55:39.404515 22516 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 17:55:39.404563 22516 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 17:55:39.404617 22516 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 17:55:39.404664 22516 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 17:55:39.404727 22516 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 17:55:39.406038 22516 out.go:204] - Booting up control plane ...
I0229 17:55:39.406129 22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 17:55:39.406214 22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 17:55:39.406275 22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 17:55:39.406344 22516 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 17:55:39.406474 22516 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 17:55:39.406520 22516 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 17:55:39.406596 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:55:39.406769 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:55:39.406874 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:55:39.407072 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:55:39.407144 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:55:39.407324 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:55:39.407405 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:55:39.407594 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:55:39.407655 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:55:39.407824 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:55:39.407844 22516 kubeadm.go:322]
I0229 17:55:39.407905 22516 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 17:55:39.407955 22516 kubeadm.go:322] timed out waiting for the condition
I0229 17:55:39.407963 22516 kubeadm.go:322]
I0229 17:55:39.407991 22516 kubeadm.go:322] This error is likely caused by:
I0229 17:55:39.408021 22516 kubeadm.go:322] - The kubelet is not running
I0229 17:55:39.408110 22516 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 17:55:39.408117 22516 kubeadm.go:322]
I0229 17:55:39.408207 22516 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 17:55:39.408242 22516 kubeadm.go:322] - 'systemctl status kubelet'
I0229 17:55:39.408271 22516 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 17:55:39.408277 22516 kubeadm.go:322]
I0229 17:55:39.408397 22516 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 17:55:39.408506 22516 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 17:55:39.408525 22516 kubeadm.go:322]
I0229 17:55:39.408651 22516 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0229 17:55:39.408779 22516 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0229 17:55:39.408888 22516 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 17:55:39.408985 22516 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0229 17:55:39.409027 22516 kubeadm.go:322]
W0229 17:55:39.409102 22516 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-180742 localhost] and IPs [192.168.39.153 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 17:53:41.501196 836 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:53:44.384900 836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:53:44.386136 836 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0229 17:55:39.409144 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0229 17:55:39.898425 22516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0229 17:55:39.913950 22516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 17:55:39.924440 22516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0229 17:55:39.924480 22516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 17:55:39.986793 22516 kubeadm.go:322] W0229 17:55:39.980404 3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 17:55:40.124210 22516 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 17:55:41.051999 22516 kubeadm.go:322] W0229 17:55:41.045820 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:55:41.053452 22516 kubeadm.go:322] W0229 17:55:41.047318 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:57:36.062335 22516 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 17:57:36.062481 22516 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0229 17:57:36.063939 22516 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 17:57:36.064012 22516 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 17:57:36.064124 22516 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 17:57:36.064262 22516 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 17:57:36.064399 22516 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 17:57:36.064530 22516 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 17:57:36.064639 22516 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 17:57:36.064705 22516 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 17:57:36.064799 22516 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 17:57:36.066659 22516 out.go:204] - Generating certificates and keys ...
I0229 17:57:36.066741 22516 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 17:57:36.066830 22516 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 17:57:36.066922 22516 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0229 17:57:36.066979 22516 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0229 17:57:36.067044 22516 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0229 17:57:36.067089 22516 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0229 17:57:36.067148 22516 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0229 17:57:36.067238 22516 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0229 17:57:36.067346 22516 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0229 17:57:36.067443 22516 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0229 17:57:36.067499 22516 kubeadm.go:322] [certs] Using the existing "sa" key
I0229 17:57:36.067579 22516 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 17:57:36.067651 22516 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 17:57:36.067714 22516 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 17:57:36.067768 22516 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 17:57:36.067814 22516 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 17:57:36.067868 22516 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 17:57:36.069742 22516 out.go:204] - Booting up control plane ...
I0229 17:57:36.069810 22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 17:57:36.069873 22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 17:57:36.069944 22516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 17:57:36.070033 22516 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 17:57:36.070169 22516 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 17:57:36.070217 22516 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 17:57:36.070288 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:57:36.070486 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:57:36.070621 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:57:36.070808 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:57:36.070874 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:57:36.071027 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:57:36.071086 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:57:36.071238 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:57:36.071301 22516 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:57:36.071460 22516 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:57:36.071474 22516 kubeadm.go:322]
I0229 17:57:36.071534 22516 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 17:57:36.071592 22516 kubeadm.go:322] timed out waiting for the condition
I0229 17:57:36.071602 22516 kubeadm.go:322]
I0229 17:57:36.071656 22516 kubeadm.go:322] This error is likely caused by:
I0229 17:57:36.071712 22516 kubeadm.go:322] - The kubelet is not running
I0229 17:57:36.071820 22516 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 17:57:36.071828 22516 kubeadm.go:322]
I0229 17:57:36.071929 22516 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 17:57:36.071978 22516 kubeadm.go:322] - 'systemctl status kubelet'
I0229 17:57:36.072026 22516 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 17:57:36.072035 22516 kubeadm.go:322]
I0229 17:57:36.072146 22516 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 17:57:36.072241 22516 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 17:57:36.072249 22516 kubeadm.go:322]
I0229 17:57:36.072340 22516 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0229 17:57:36.072426 22516 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0229 17:57:36.072489 22516 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 17:57:36.072555 22516 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0229 17:57:36.072601 22516 kubeadm.go:322]
I0229 17:57:36.072608 22516 kubeadm.go:406] StartCluster complete in 3m54.688031218s
I0229 17:57:36.072639 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0229 17:57:36.072695 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0229 17:57:36.119684 22516 cri.go:89] found id: ""
I0229 17:57:36.119708 22516 logs.go:276] 0 containers: []
W0229 17:57:36.119717 22516 logs.go:278] No container was found matching "kube-apiserver"
I0229 17:57:36.119724 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0229 17:57:36.119783 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0229 17:57:36.165725 22516 cri.go:89] found id: ""
I0229 17:57:36.165748 22516 logs.go:276] 0 containers: []
W0229 17:57:36.165758 22516 logs.go:278] No container was found matching "etcd"
I0229 17:57:36.165766 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0229 17:57:36.165821 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0229 17:57:36.216132 22516 cri.go:89] found id: ""
I0229 17:57:36.216161 22516 logs.go:276] 0 containers: []
W0229 17:57:36.216172 22516 logs.go:278] No container was found matching "coredns"
I0229 17:57:36.216179 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0229 17:57:36.216240 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0229 17:57:36.266689 22516 cri.go:89] found id: ""
I0229 17:57:36.266717 22516 logs.go:276] 0 containers: []
W0229 17:57:36.266727 22516 logs.go:278] No container was found matching "kube-scheduler"
I0229 17:57:36.266734 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0229 17:57:36.266800 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0229 17:57:36.308867 22516 cri.go:89] found id: ""
I0229 17:57:36.308891 22516 logs.go:276] 0 containers: []
W0229 17:57:36.308898 22516 logs.go:278] No container was found matching "kube-proxy"
I0229 17:57:36.308903 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0229 17:57:36.308948 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0229 17:57:36.346038 22516 cri.go:89] found id: ""
I0229 17:57:36.346064 22516 logs.go:276] 0 containers: []
W0229 17:57:36.346073 22516 logs.go:278] No container was found matching "kube-controller-manager"
I0229 17:57:36.346080 22516 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0229 17:57:36.346149 22516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0229 17:57:36.383537 22516 cri.go:89] found id: ""
I0229 17:57:36.383564 22516 logs.go:276] 0 containers: []
W0229 17:57:36.383571 22516 logs.go:278] No container was found matching "kindnet"
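(Annotation: the sweep above checked all seven expected container names — kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet — via
    sudo crictl ps -a --quiet --name=<component>
and every query came back empty, so none of the control-plane static pods were ever created in containerd. That is why minikube now falls back to gathering kubelet, dmesg and containerd logs below.)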
I0229 17:57:36.383580 22516 logs.go:123] Gathering logs for kubelet ...
I0229 17:57:36.383592 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0229 17:57:36.411142 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:27 ingress-addon-legacy-180742 kubelet[6000]: F0229 17:57:27.972664 6000 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:57:36.416761 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:29 ingress-addon-legacy-180742 kubelet[6024]: F0229 17:57:29.256416 6024 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:57:36.421811 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:30 ingress-addon-legacy-180742 kubelet[6050]: F0229 17:57:30.511750 6050 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:57:36.426624 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:31 ingress-addon-legacy-180742 kubelet[6076]: F0229 17:57:31.748210 6076 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:57:36.431407 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:33 ingress-addon-legacy-180742 kubelet[6099]: F0229 17:57:33.005862 6099 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:57:36.436197 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:34 ingress-addon-legacy-180742 kubelet[6124]: F0229 17:57:34.247151 6124 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:57:36.441013 22516 logs.go:138] Found kubelet problem: Feb 29 17:57:35 ingress-addon-legacy-180742 kubelet[6148]: F0229 17:57:35.485727 6148 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
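(Annotation: the fatal repeated above — "Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache" — shows the kubelet exiting during ContainerManager startup on every restart, before it can serve /healthz on port 10248, which is exactly why each earlier kubelet-check curl got "connection refused". The message appears to come from the kubelet's embedded cadvisor failing to report root-filesystem stats. On the node, the usual next steps would be kubeadm's own suggestions plus a quick look at the root filesystem, e.g.:
    sudo systemctl status kubelet
    sudo journalctl -u kubelet -n 200 --no-pager
    df -h /
These are standard commands offered as a sketch, not output from the captured log.)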
I0229 17:57:36.443629 22516 logs.go:123] Gathering logs for dmesg ...
I0229 17:57:36.443644 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0229 17:57:36.458261 22516 logs.go:123] Gathering logs for describe nodes ...
I0229 17:57:36.458282 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0229 17:57:36.526123 22516 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I0229 17:57:36.526143 22516 logs.go:123] Gathering logs for containerd ...
I0229 17:57:36.526160 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0229 17:57:36.567189 22516 logs.go:123] Gathering logs for container status ...
I0229 17:57:36.567225 22516 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
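(Annotation: the "container status" probe above is a fallback chain: the backtick substitutes the full path to crictl when it is on PATH (otherwise the literal word crictl), so it effectively tries
    sudo crictl ps -a
first and only falls back to
    sudo docker ps -a
if that fails. On this containerd node the crictl branch is the one that applies; the docker fallback exists for the docker runtime. This expansion is an illustration, not output from the log.)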
W0229 17:57:36.638180 22516 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 17:55:39.980404 3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:55:41.045820 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:55:41.047318 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 17:57:36.638233 22516 out.go:239] *
W0229 17:57:36.638317 22516 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 17:55:39.980404 3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:55:41.045820 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:55:41.047318 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 17:55:39.980404 3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:55:41.045820 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:55:41.047318 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
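The [kubelet-check] lines above are kubeadm polling the kubelet healthz endpoint inside the guest until the 4m0s wait expires. The same probe, and the kubelet service state it depends on, can be checked by hand from inside the VM; this is a sketch using the profile name and binary path from this run, not output captured here:

    out/minikube-linux-amd64 ssh -p ingress-addon-legacy-180742
    # inside the guest:
    curl -sSL http://localhost:10248/healthz        # same endpoint kubeadm polls; a healthy kubelet answers "ok"
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager     # last kubelet log lines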
W0229 17:57:36.638342 22516 out.go:239] *
W0229 17:57:36.639322 22516 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0229 17:57:36.641894 22516 out.go:177] X Problems detected in kubelet:
I0229 17:57:36.643825 22516 out.go:177] Feb 29 17:57:27 ingress-addon-legacy-180742 kubelet[6000]: F0229 17:57:27.972664 6000 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 17:57:36.645243 22516 out.go:177] Feb 29 17:57:29 ingress-addon-legacy-180742 kubelet[6024]: F0229 17:57:29.256416 6024 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 17:57:36.646647 22516 out.go:177] Feb 29 17:57:30 ingress-addon-legacy-180742 kubelet[6050]: F0229 17:57:30.511750 6050 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 17:57:36.649232 22516 out.go:177]
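The three kubelet entries flagged above are fatal (F-level) startup failures: the kubelet exits while starting its ContainerManager because it cannot read root-filesystem information from its in-memory (cAdvisor) cache, which is why the healthz port 10248 polled by kubeadm never answers. Assuming shell access to the guest, a quick way to confirm the crash loop is to look for that message in the kubelet journal; this is a sketch, not captured output:

    sudo journalctl -u kubelet --no-pager | grep -F 'failed to get rootfs info' | tail -n 5
    sudo systemctl is-active kubelet    # a steady "active" here would contradict the crash loop shown above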
W0229 17:57:36.650532 22516 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 17:55:39.980404 3535 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:55:41.045820 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:55:41.047318 3535 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
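The crictl commands suggested in the kubeadm message can also confirm that the control-plane containers never started at all, which is consistent with the kubelet dying before it could launch the static pods. This is a sketch assuming root access in the guest and the containerd socket path quoted above:

    sudo crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause
    # an empty listing points back at the kubelet; if a container does show up, inspect it:
    sudo crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID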
W0229 17:57:36.650604 22516 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0229 17:57:36.650640 22516 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0229 17:57:36.652209 22516 out.go:177]
** /stderr **
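The suggestion emitted above is to pass the kubelet cgroup driver explicitly. A retry following that advice keeps the original arguments from this run and adds one flag; this is a sketch of the suggested invocation, not a command that was executed here:

    out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd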
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-180742 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (297.13s)
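To reproduce only this failure outside the full job, the subtest can be re-run on its own. Assuming the minikube repository layout in which ingress_addon_legacy_test.go lives under test/integration, and a prebuilt out/minikube-linux-amd64, a hedged invocation is:

    go test ./test/integration -run 'TestIngressAddonLegacy/StartLegacyK8sCluster' -timeout 60m -v

The integration harness may need additional flags for the driver and container runtime used here, so treat this as a starting point rather than the exact CI command.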