=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd
E0229 01:22:12.442201 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:24:28.597002 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:24:56.283941 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/addons-026134/client.crt: no such file or directory
E0229 01:26:14.620399 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.626321 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.636596 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.656863 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.697120 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.777453 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:14.937856 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:15.258496 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:15.899469 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:17.179963 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:19.741127 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:24.862028 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:35.103138 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
E0229 01:26:55.584000 316336 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/functional-601906/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd: exit status 109 (4m50.984566235s)
-- stdout --
* [ingress-addon-legacy-671566] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18063
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node ingress-addon-legacy-671566 in cluster ingress-addon-legacy-671566
* Downloading Kubernetes v1.18.20 preload ...
* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Feb 29 01:26:54 ingress-addon-legacy-671566 kubelet[6134]: F0229 01:26:54.305038 6134 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 01:26:55 ingress-addon-legacy-671566 kubelet[6161]: F0229 01:26:55.554030 6161 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 01:26:56 ingress-addon-legacy-671566 kubelet[6187]: F0229 01:26:56.762409 6187 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
-- /stdout --
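The kubelet crash loop captured in the stdout above ("Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache") is kubelet's container manager failing to obtain root-filesystem stats from its embedded cAdvisor, which appears to be what surfaces as exit status 109. As a minimal sketch, the failing command from the "(dbg) Run:" line could be driven from Go roughly as below; the runStart helper is hypothetical and is not the test's actual plumbing:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runStart is a hypothetical helper mirroring the "(dbg) Run:" line above:
// it shells out to the minikube binary and surfaces the exit status.
func runStart() error {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "ingress-addon-legacy-671566",
		"--kubernetes-version=v1.18.20",
		"--memory=4096", "--wait=true",
		"--alsologtostderr", "-v=5",
		"--driver=kvm2", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		// The log above shows this run ending with exit status 109.
		return fmt.Errorf("minikube exited %d:\n%s", ee.ExitCode(), out)
	}
	return err
}

func main() {
	if err := runStart(); err != nil {
		fmt.Println(err)
	}
}
```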
** stderr **
I0229 01:22:11.898503 325441 out.go:291] Setting OutFile to fd 1 ...
I0229 01:22:11.898776 325441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:22:11.898787 325441 out.go:304] Setting ErrFile to fd 2...
I0229 01:22:11.898794 325441 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 01:22:11.899020 325441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18063-309085/.minikube/bin
I0229 01:22:11.899659 325441 out.go:298] Setting JSON to false
I0229 01:22:11.900691 325441 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3876,"bootTime":1709165856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0229 01:22:11.900760 325441 start.go:139] virtualization: kvm guest
I0229 01:22:11.902731 325441 out.go:177] * [ingress-addon-legacy-671566] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0229 01:22:11.904188 325441 out.go:177] - MINIKUBE_LOCATION=18063
I0229 01:22:11.905327 325441 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0229 01:22:11.904148 325441 notify.go:220] Checking for updates...
I0229 01:22:11.907415 325441 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18063-309085/kubeconfig
I0229 01:22:11.908645 325441 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18063-309085/.minikube
I0229 01:22:11.909769 325441 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0229 01:22:11.910844 325441 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0229 01:22:11.912164 325441 driver.go:392] Setting default libvirt URI to qemu:///system
I0229 01:22:11.945211 325441 out.go:177] * Using the kvm2 driver based on user configuration
I0229 01:22:11.946186 325441 start.go:299] selected driver: kvm2
I0229 01:22:11.946198 325441 start.go:903] validating driver "kvm2" against <nil>
I0229 01:22:11.946211 325441 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0229 01:22:11.946937 325441 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 01:22:11.947028 325441 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18063-309085/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0229 01:22:11.961259 325441 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I0229 01:22:11.961307 325441 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0229 01:22:11.961532 325441 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0229 01:22:11.961615 325441 cni.go:84] Creating CNI manager for ""
I0229 01:22:11.961632 325441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0229 01:22:11.961647 325441 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0229 01:22:11.961659 325441 start_flags.go:323] config:
{Name:ingress-addon-legacy-671566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 01:22:11.961846 325441 iso.go:125] acquiring lock: {Name:mk1a6013324bd96d611c2de882ca0af6f4df38f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 01:22:11.963400 325441 out.go:177] * Starting control plane node ingress-addon-legacy-671566 in cluster ingress-addon-legacy-671566
I0229 01:22:11.964534 325441 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
I0229 01:22:12.463360 325441 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
I0229 01:22:12.463393 325441 cache.go:56] Caching tarball of preloaded images
I0229 01:22:12.463594 325441 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
I0229 01:22:12.466050 325441 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0229 01:22:12.467253 325441 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
I0229 01:22:12.577567 325441 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4?checksum=md5:b585eebe982180189fed21f0bd283cca -> /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4
I0229 01:22:32.878632 325441 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
I0229 01:22:32.878729 325441 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 ...
I0229 01:22:33.948296 325441 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on containerd
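The preload phase above downloads the tarball with an md5 checksum embedded in the URL query, then verifies the file on disk. A minimal illustrative sketch of that download-then-verify pattern (not minikube's actual implementation; the function name and destination path are placeholders):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches a file and compares its MD5 against an
// expected hex digest, mirroring the preload verification logged above.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Tee the body into both the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and digest come from the download.go line above (?checksum=md5:...).
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4",
		"preloaded-images.tar.lz4",
		"b585eebe982180189fed21f0bd283cca",
	)
	if err != nil {
		panic(err)
	}
}
```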
I0229 01:22:33.948632 325441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/config.json ...
I0229 01:22:33.948663 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/config.json: {Name:mk9b97164cd8f4f8241d6ee97e5ecd8f0f0f5077 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:22:33.948856 325441 start.go:365] acquiring machines lock for ingress-addon-legacy-671566: {Name:mk8de78527e9cb979575b614e5d893b33768243a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0229 01:22:33.948889 325441 start.go:369] acquired machines lock for "ingress-addon-legacy-671566" in 18.053µs
I0229 01:22:33.948906 325441 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-671566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0229 01:22:33.948983 325441 start.go:125] createHost starting for "" (driver="kvm2")
I0229 01:22:33.951664 325441 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I0229 01:22:33.951825 325441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0229 01:22:33.951865 325441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 01:22:33.967099 325441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44677
I0229 01:22:33.967562 325441 main.go:141] libmachine: () Calling .GetVersion
I0229 01:22:33.968109 325441 main.go:141] libmachine: Using API Version 1
I0229 01:22:33.968131 325441 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 01:22:33.968476 325441 main.go:141] libmachine: () Calling .GetMachineName
I0229 01:22:33.968697 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
I0229 01:22:33.968875 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:33.969010 325441 start.go:159] libmachine.API.Create for "ingress-addon-legacy-671566" (driver="kvm2")
I0229 01:22:33.969035 325441 client.go:168] LocalClient.Create starting
I0229 01:22:33.969070 325441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem
I0229 01:22:33.969110 325441 main.go:141] libmachine: Decoding PEM data...
I0229 01:22:33.969133 325441 main.go:141] libmachine: Parsing certificate...
I0229 01:22:33.969211 325441 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem
I0229 01:22:33.969243 325441 main.go:141] libmachine: Decoding PEM data...
I0229 01:22:33.969263 325441 main.go:141] libmachine: Parsing certificate...
I0229 01:22:33.969289 325441 main.go:141] libmachine: Running pre-create checks...
I0229 01:22:33.969304 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .PreCreateCheck
I0229 01:22:33.969642 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetConfigRaw
I0229 01:22:33.969983 325441 main.go:141] libmachine: Creating machine...
I0229 01:22:33.969999 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .Create
I0229 01:22:33.970139 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Creating KVM machine...
I0229 01:22:33.971316 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found existing default KVM network
I0229 01:22:33.972148 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:33.972013 325520 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d960}
I0229 01:22:33.977048 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | trying to create private KVM network mk-ingress-addon-legacy-671566 192.168.39.0/24...
I0229 01:22:34.040799 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | private KVM network mk-ingress-addon-legacy-671566 192.168.39.0/24 created
I0229 01:22:34.040845 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.040768 325520 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18063-309085/.minikube
I0229 01:22:34.040860 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting up store path in /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566 ...
I0229 01:22:34.040878 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Building disk image from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
I0229 01:22:34.041023 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Downloading /home/jenkins/minikube-integration/18063-309085/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18063-309085/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
I0229 01:22:34.289385 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.289248 325520 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa...
I0229 01:22:34.490112 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.489954 325520 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/ingress-addon-legacy-671566.rawdisk...
I0229 01:22:34.490159 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Writing magic tar header
I0229 01:22:34.490203 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Writing SSH key tar header
I0229 01:22:34.490221 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:34.490071 325520 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566 ...
I0229 01:22:34.490240 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566
I0229 01:22:34.490251 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566 (perms=drwx------)
I0229 01:22:34.490261 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube/machines
I0229 01:22:34.490274 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube/machines (perms=drwxr-xr-x)
I0229 01:22:34.490294 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085/.minikube (perms=drwxr-xr-x)
I0229 01:22:34.490307 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration/18063-309085 (perms=drwxrwxr-x)
I0229 01:22:34.490316 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085/.minikube
I0229 01:22:34.490331 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18063-309085
I0229 01:22:34.490340 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0229 01:22:34.490348 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home/jenkins
I0229 01:22:34.490357 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0229 01:22:34.490373 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0229 01:22:34.490386 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Checking permissions on dir: /home
I0229 01:22:34.490398 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Creating domain...
I0229 01:22:34.490411 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Skipping /home - not owner
I0229 01:22:34.491421 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) define libvirt domain using xml:
I0229 01:22:34.491449 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <domain type='kvm'>
I0229 01:22:34.491462 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <name>ingress-addon-legacy-671566</name>
I0229 01:22:34.491473 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <memory unit='MiB'>4096</memory>
I0229 01:22:34.491490 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <vcpu>2</vcpu>
I0229 01:22:34.491501 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <features>
I0229 01:22:34.491509 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <acpi/>
I0229 01:22:34.491520 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <apic/>
I0229 01:22:34.491528 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <pae/>
I0229 01:22:34.491536 325441 main.go:141] libmachine: (ingress-addon-legacy-671566)
I0229 01:22:34.491542 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </features>
I0229 01:22:34.491550 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <cpu mode='host-passthrough'>
I0229 01:22:34.491558 325441 main.go:141] libmachine: (ingress-addon-legacy-671566)
I0229 01:22:34.491564 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </cpu>
I0229 01:22:34.491570 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <os>
I0229 01:22:34.491579 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <type>hvm</type>
I0229 01:22:34.491599 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <boot dev='cdrom'/>
I0229 01:22:34.491613 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <boot dev='hd'/>
I0229 01:22:34.491620 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <bootmenu enable='no'/>
I0229 01:22:34.491629 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </os>
I0229 01:22:34.491637 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <devices>
I0229 01:22:34.491647 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <disk type='file' device='cdrom'>
I0229 01:22:34.491667 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/boot2docker.iso'/>
I0229 01:22:34.491685 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <target dev='hdc' bus='scsi'/>
I0229 01:22:34.491692 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <readonly/>
I0229 01:22:34.491704 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </disk>
I0229 01:22:34.491714 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <disk type='file' device='disk'>
I0229 01:22:34.491724 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <driver name='qemu' type='raw' cache='default' io='threads' />
I0229 01:22:34.491738 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <source file='/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/ingress-addon-legacy-671566.rawdisk'/>
I0229 01:22:34.491753 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <target dev='hda' bus='virtio'/>
I0229 01:22:34.491763 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </disk>
I0229 01:22:34.491776 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <interface type='network'>
I0229 01:22:34.491791 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <source network='mk-ingress-addon-legacy-671566'/>
I0229 01:22:34.491804 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <model type='virtio'/>
I0229 01:22:34.491811 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </interface>
I0229 01:22:34.491821 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <interface type='network'>
I0229 01:22:34.491832 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <source network='default'/>
I0229 01:22:34.491845 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <model type='virtio'/>
I0229 01:22:34.491859 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </interface>
I0229 01:22:34.491872 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <serial type='pty'>
I0229 01:22:34.491882 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <target port='0'/>
I0229 01:22:34.491890 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </serial>
I0229 01:22:34.491897 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <console type='pty'>
I0229 01:22:34.491906 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <target type='serial' port='0'/>
I0229 01:22:34.491917 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </console>
I0229 01:22:34.491929 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <rng model='virtio'>
I0229 01:22:34.491943 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) <backend model='random'>/dev/random</backend>
I0229 01:22:34.491958 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </rng>
I0229 01:22:34.491969 325441 main.go:141] libmachine: (ingress-addon-legacy-671566)
I0229 01:22:34.491983 325441 main.go:141] libmachine: (ingress-addon-legacy-671566)
I0229 01:22:34.491990 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </devices>
I0229 01:22:34.492004 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) </domain>
I0229 01:22:34.492041 325441 main.go:141] libmachine: (ingress-addon-legacy-671566)
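The domain XML printed line-by-line above is what the kvm2 driver hands to libvirt. A trimmed-down sketch of producing such XML with text/template follows; the domainConfig struct and its field names are hypothetical, not the driver's real types:

```go
package main

import (
	"os"
	"text/template"
)

// domainTmpl is a cut-down stand-in for the libvirt domain XML logged above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

type domainConfig struct {
	Name, DiskPath, Network string
	MemoryMiB, CPUs         int
}

func main() {
	cfg := domainConfig{
		Name:      "ingress-addon-legacy-671566",
		MemoryMiB: 4096,
		CPUs:      2,
		DiskPath:  "/path/to/ingress-addon-legacy-671566.rawdisk", // placeholder
		Network:   "mk-ingress-addon-legacy-671566",
	}
	// Render the XML that would then be defined as a libvirt domain.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```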
I0229 01:22:34.495930 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:7e:da:21 in network default
I0229 01:22:34.496528 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Ensuring networks are active...
I0229 01:22:34.496559 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:34.497242 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Ensuring network default is active
I0229 01:22:34.497536 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Ensuring network mk-ingress-addon-legacy-671566 is active
I0229 01:22:34.498058 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Getting domain xml...
I0229 01:22:34.498694 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Creating domain...
I0229 01:22:35.671709 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Waiting to get IP...
I0229 01:22:35.672637 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:35.673049 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:35.673093 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:35.673015 325520 retry.go:31] will retry after 285.941832ms: waiting for machine to come up
I0229 01:22:35.960467 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:35.960897 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:35.960928 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:35.960863 325520 retry.go:31] will retry after 243.277464ms: waiting for machine to come up
I0229 01:22:36.205244 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:36.205643 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:36.205671 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:36.205592 325520 retry.go:31] will retry after 418.531661ms: waiting for machine to come up
I0229 01:22:36.626173 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:36.626689 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:36.626718 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:36.626638 325520 retry.go:31] will retry after 468.757069ms: waiting for machine to come up
I0229 01:22:37.097171 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:37.097625 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:37.097656 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:37.097553 325520 retry.go:31] will retry after 516.742124ms: waiting for machine to come up
I0229 01:22:37.616345 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:37.616783 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:37.616807 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:37.616724 325520 retry.go:31] will retry after 840.859173ms: waiting for machine to come up
I0229 01:22:38.458829 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:38.459252 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:38.459290 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:38.459183 325520 retry.go:31] will retry after 1.160952675s: waiting for machine to come up
I0229 01:22:39.621904 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:39.622419 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:39.622447 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:39.622367 325520 retry.go:31] will retry after 981.893154ms: waiting for machine to come up
I0229 01:22:40.605788 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:40.606261 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:40.606297 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:40.606226 325520 retry.go:31] will retry after 1.784036247s: waiting for machine to come up
I0229 01:22:42.393173 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:42.393618 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:42.393646 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:42.393566 325520 retry.go:31] will retry after 1.544306192s: waiting for machine to come up
I0229 01:22:43.940353 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:43.940812 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:43.940848 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:43.940763 325520 retry.go:31] will retry after 2.046404556s: waiting for machine to come up
I0229 01:22:45.988347 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:45.988784 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:45.988803 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:45.988741 325520 retry.go:31] will retry after 2.82311181s: waiting for machine to come up
I0229 01:22:48.815601 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:48.815977 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:48.816003 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:48.815935 325520 retry.go:31] will retry after 3.058609083s: waiting for machine to come up
I0229 01:22:51.878438 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:51.878941 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find current IP address of domain ingress-addon-legacy-671566 in network mk-ingress-addon-legacy-671566
I0229 01:22:51.878972 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | I0229 01:22:51.878906 325520 retry.go:31] will retry after 3.449863463s: waiting for machine to come up
I0229 01:22:55.330867 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.331353 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Found IP for machine: 192.168.39.248
I0229 01:22:55.331380 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Reserving static IP address...
I0229 01:22:55.331397 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has current primary IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.331756 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-671566", mac: "52:54:00:3b:c8:ec", ip: "192.168.39.248"} in network mk-ingress-addon-legacy-671566
I0229 01:22:55.401104 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Getting to WaitForSSH function...
I0229 01:22:55.401138 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Reserved static IP address: 192.168.39.248
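The "will retry after ..." lines above come from a polling loop that sleeps a jittered, growing interval while waiting for the VM's DHCP lease to appear. A rough sketch of that pattern, assuming a hypothetical lookupIP callback in place of the real lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it succeeds or the deadline passes,
// sleeping a jittered, roughly doubling interval between attempts,
// much like the retry.go lines in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter explains the irregular intervals in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 { // simulate the lease not existing yet
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.248", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}
```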
I0229 01:22:55.401153 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Waiting for SSH to be available...
I0229 01:22:55.403683 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.404141 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.404173 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.404390 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Using SSH client type: external
I0229 01:22:55.404431 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Using SSH private key: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa (-rw-------)
I0229 01:22:55.404477 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa -p 22] /usr/bin/ssh <nil>}
I0229 01:22:55.404498 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | About to run SSH command:
I0229 01:22:55.404514 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | exit 0
I0229 01:22:55.529946 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | SSH cmd err, output: <nil>:
I0229 01:22:55.530187 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) KVM machine creation complete!
I0229 01:22:55.530509 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetConfigRaw
I0229 01:22:55.531058 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:55.531263 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:55.531417 325441 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0229 01:22:55.531434 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetState
I0229 01:22:55.532486 325441 main.go:141] libmachine: Detecting operating system of created instance...
I0229 01:22:55.532502 325441 main.go:141] libmachine: Waiting for SSH to be available...
I0229 01:22:55.532507 325441 main.go:141] libmachine: Getting to WaitForSSH function...
I0229 01:22:55.532513 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:55.534723 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.535084 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.535113 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.535257 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:55.535466 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.535594 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.535697 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:55.535838 325441 main.go:141] libmachine: Using SSH client type: native
I0229 01:22:55.536034 325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.248 22 <nil> <nil>}
I0229 01:22:55.536047 325441 main.go:141] libmachine: About to run SSH command:
exit 0
I0229 01:22:55.637123 325441 main.go:141] libmachine: SSH cmd err, output: <nil>:
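Machine readiness is probed by running "exit 0" over SSH, first via the external ssh binary and then via the native client above. A minimal sketch of the same probe using golang.org/x/crypto/ssh (not the client minikube actually embeds):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// checkSSH dials the guest and runs "exit 0"; success means sshd is up.
func checkSSH(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return fmt.Errorf("dial %s: %w", addr, err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := checkSSH("192.168.39.248:22",
		"/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa")
	fmt.Println("SSH cmd err:", err)
}
```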
I0229 01:22:55.637142 325441 main.go:141] libmachine: Detecting the provisioner...
I0229 01:22:55.637150 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:55.639921 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.640193 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.640218 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.640354 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:55.640525 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.640719 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.640899 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:55.641071 325441 main.go:141] libmachine: Using SSH client type: native
I0229 01:22:55.641251 325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.248 22 <nil> <nil>}
I0229 01:22:55.641262 325441 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0229 01:22:55.742934 325441 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0229 01:22:55.743031 325441 main.go:141] libmachine: found compatible host: buildroot
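The provisioner is detected by reading /etc/os-release from the guest, as the cat output above shows. An illustrative parser for that key=value format (minikube's real detection logic may differ):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads an os-release file into a map, e.g.
// NAME=Buildroot, VERSION_ID=2023.02.9 from the output above.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		// Values such as PRETTY_NAME="Buildroot 2023.02.9" are quoted.
		info[k] = strings.Trim(v, `"`)
	}
	return info, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Println(info["NAME"], info["VERSION_ID"])
}
```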
I0229 01:22:55.743047 325441 main.go:141] libmachine: Provisioning with buildroot...
I0229 01:22:55.743059 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
I0229 01:22:55.743311 325441 buildroot.go:166] provisioning hostname "ingress-addon-legacy-671566"
I0229 01:22:55.743337 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
I0229 01:22:55.743547 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:55.746182 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.746628 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.746664 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.746774 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:55.746943 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.747063 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.747218 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:55.747358 325441 main.go:141] libmachine: Using SSH client type: native
I0229 01:22:55.747557 325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.248 22 <nil> <nil>}
I0229 01:22:55.747576 325441 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-671566 && echo "ingress-addon-legacy-671566" | sudo tee /etc/hostname
I0229 01:22:55.866916 325441 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-671566
I0229 01:22:55.866940 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:55.869460 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.869773 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.869802 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.869961 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:55.870143 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.870309 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:55.870483 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:55.870650 325441 main.go:141] libmachine: Using SSH client type: native
I0229 01:22:55.870836 325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.248 22 <nil> <nil>}
I0229 01:22:55.870863 325441 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-671566' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-671566/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-671566' | sudo tee -a /etc/hosts;
fi
fi
I0229 01:22:55.985468 325441 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 01:22:55.985496 325441 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18063-309085/.minikube CaCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18063-309085/.minikube}
I0229 01:22:55.985556 325441 buildroot.go:174] setting up certificates
I0229 01:22:55.985571 325441 provision.go:83] configureAuth start
I0229 01:22:55.985587 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetMachineName
I0229 01:22:55.985820 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
I0229 01:22:55.987970 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.988276 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.988303 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.988522 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:55.990648 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.990957 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:55.990983 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:55.991126 325441 provision.go:138] copyHostCerts
I0229 01:22:55.991156 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
I0229 01:22:55.991187 325441 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem, removing ...
I0229 01:22:55.991211 325441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem
I0229 01:22:55.991285 325441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/ca.pem (1082 bytes)
I0229 01:22:55.991384 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
I0229 01:22:55.991409 325441 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem, removing ...
I0229 01:22:55.991418 325441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem
I0229 01:22:55.991453 325441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/cert.pem (1123 bytes)
I0229 01:22:55.991511 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
I0229 01:22:55.991530 325441 exec_runner.go:144] found /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem, removing ...
I0229 01:22:55.991539 325441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem
I0229 01:22:55.991573 325441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18063-309085/.minikube/key.pem (1675 bytes)
I0229 01:22:55.991633 325441 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-671566 san=[192.168.39.248 192.168.39.248 localhost 127.0.0.1 minikube ingress-addon-legacy-671566]
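The server certificate above is generated against the minikube CA with SANs covering the VM IP, localhost, and the cluster hostnames. A standard-library sketch of issuing such a SAN-bearing certificate; this is illustrative, not minikube's provision code, and the throwaway CA in main is purely for the example:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by the given CA, with
// the IP and DNS SANs shown in the "generating server cert" line above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-671566"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line: IPs plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.248"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-671566"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, priv, nil
}

func main() {
	// Throwaway CA for demonstration only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
```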
I0229 01:22:56.081838 325441 provision.go:172] copyRemoteCerts
I0229 01:22:56.081889 325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0229 01:22:56.081908 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:56.083810 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.084035 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.084061 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.084175 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:56.084354 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:56.084511 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:56.084616 325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
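The sshutil line above constructs a key-authenticated SSH client for the VM from the IP/Port/SSHKeyPath/Username fields it prints. A rough equivalent with golang.org/x/crypto/ssh; newClient is illustrative, and host-key verification is skipped only because these are throwaway test VMs:

package sshutil

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newClient dials the guest with the machine's private key, matching the
// fields printed in the "new ssh client" log line.
func newClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for ephemeral test VMs only
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
}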
I0229 01:22:56.164896 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0229 01:22:56.164960 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0229 01:22:56.194275 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem -> /etc/docker/server.pem
I0229 01:22:56.194314 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0229 01:22:56.222729 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0229 01:22:56.222777 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0229 01:22:56.251228 325441 provision.go:86] duration metric: configureAuth took 265.646641ms
I0229 01:22:56.251251 325441 buildroot.go:189] setting minikube options for container-runtime
I0229 01:22:56.251418 325441 config.go:182] Loaded profile config "ingress-addon-legacy-671566": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
I0229 01:22:56.251447 325441 main.go:141] libmachine: Checking connection to Docker...
I0229 01:22:56.251464 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetURL
I0229 01:22:56.252480 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | Using libvirt version 6000000
I0229 01:22:56.254238 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.254550 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.254583 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.254702 325441 main.go:141] libmachine: Docker is up and running!
I0229 01:22:56.254716 325441 main.go:141] libmachine: Reticulating splines...
I0229 01:22:56.254725 325441 client.go:171] LocalClient.Create took 22.285681229s
I0229 01:22:56.254750 325441 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-671566" took 22.285740098s
I0229 01:22:56.254764 325441 start.go:300] post-start starting for "ingress-addon-legacy-671566" (driver="kvm2")
I0229 01:22:56.254778 325441 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0229 01:22:56.254801 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:56.255023 325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0229 01:22:56.255045 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:56.256772 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.257034 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.257059 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.257175 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:56.257358 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:56.257510 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:56.257629 325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
I0229 01:22:56.335980 325441 ssh_runner.go:195] Run: cat /etc/os-release
I0229 01:22:56.340541 325441 info.go:137] Remote host: Buildroot 2023.02.9
I0229 01:22:56.340567 325441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/addons for local assets ...
I0229 01:22:56.340622 325441 filesync.go:126] Scanning /home/jenkins/minikube-integration/18063-309085/.minikube/files for local assets ...
I0229 01:22:56.340721 325441 filesync.go:149] local asset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> 3163362.pem in /etc/ssl/certs
I0229 01:22:56.340735 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> /etc/ssl/certs/3163362.pem
I0229 01:22:56.340851 325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0229 01:22:56.350377 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /etc/ssl/certs/3163362.pem (1708 bytes)
I0229 01:22:56.375480 325441 start.go:303] post-start completed in 120.702833ms
I0229 01:22:56.375525 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetConfigRaw
I0229 01:22:56.376015 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
I0229 01:22:56.378311 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.378615 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.378646 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.378848 325441 profile.go:148] Saving config to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/config.json ...
I0229 01:22:56.379006 325441 start.go:128] duration metric: createHost completed in 22.43001345s
I0229 01:22:56.379027 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:56.381034 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.381348 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.381384 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.381472 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:56.381671 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:56.381819 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:56.381937 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:56.382073 325441 main.go:141] libmachine: Using SSH client type: native
I0229 01:22:56.382296 325441 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.248 22 <nil> <nil>}
I0229 01:22:56.382309 325441 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0229 01:22:56.482848 325441 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709169776.449188134
I0229 01:22:56.482874 325441 fix.go:206] guest clock: 1709169776.449188134
I0229 01:22:56.482884 325441 fix.go:219] Guest: 2024-02-29 01:22:56.449188134 +0000 UTC Remote: 2024-02-29 01:22:56.379016613 +0000 UTC m=+44.530393722 (delta=70.171521ms)
I0229 01:22:56.482910 325441 fix.go:190] guest clock delta is within tolerance: 70.171521ms
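The fix.go lines above read the guest clock with `date +%s.%N`, diff it against the host's wall clock, and only correct the VM clock if the delta exceeds a tolerance. A small sketch of that parse-and-compare, assuming output shaped like the "1709169776.449188134" in the log; the function names are illustrative:

package fix

import (
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1709169776.449188134"
// into a time.Time (%N always prints nine digits of nanoseconds).
func parseGuestClock(out string) (time.Time, error) {
	secs, nanos, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secs, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nanos != "" {
		if nsec, err = strconv.ParseInt(nanos, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

// withinTolerance mirrors the "guest clock delta is within tolerance" check.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}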
I0229 01:22:56.482917 325441 start.go:83] releasing machines lock for "ingress-addon-legacy-671566", held for 22.534018745s
I0229 01:22:56.482942 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:56.483195 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
I0229 01:22:56.485724 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.486048 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.486094 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.486267 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:56.486801 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:56.486954 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .DriverName
I0229 01:22:56.487048 325441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0229 01:22:56.487090 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:56.487149 325441 ssh_runner.go:195] Run: cat /version.json
I0229 01:22:56.487176 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHHostname
I0229 01:22:56.489465 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.489672 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.489812 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.489841 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.489964 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:56.490053 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:56.490099 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:56.490142 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:56.490222 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHPort
I0229 01:22:56.490294 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:56.490376 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHKeyPath
I0229 01:22:56.490440 325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
I0229 01:22:56.490483 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetSSHUsername
I0229 01:22:56.490581 325441 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18063-309085/.minikube/machines/ingress-addon-legacy-671566/id_rsa Username:docker}
I0229 01:22:56.587021 325441 ssh_runner.go:195] Run: systemctl --version
I0229 01:22:56.593100 325441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0229 01:22:56.599237 325441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0229 01:22:56.599307 325441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0229 01:22:56.622660 325441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
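The find/mv invocation above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they cannot shadow the CNI that minikube is about to configure. The same idea in Go; disableBridgeConfigs is illustrative:

package cni

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI config files in dir to
// <name>.mk_disabled, mirroring the find -exec mv pipeline in the log.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}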
I0229 01:22:56.622688 325441 start.go:475] detecting cgroup driver to use...
I0229 01:22:56.622771 325441 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0229 01:22:56.651217 325441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 01:22:56.664580 325441 docker.go:217] disabling cri-docker service (if available) ...
I0229 01:22:56.664640 325441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0229 01:22:56.678244 325441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0229 01:22:56.691547 325441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0229 01:22:56.804511 325441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0229 01:22:56.963930 325441 docker.go:233] disabling docker service ...
I0229 01:22:56.964005 325441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0229 01:22:56.980165 325441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0229 01:22:56.992964 325441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0229 01:22:57.117247 325441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0229 01:22:57.245005 325441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0229 01:22:57.260503 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0229 01:22:57.279987 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0229 01:22:57.291963 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0229 01:22:57.302572 325441 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0229 01:22:57.302635 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0229 01:22:57.312953 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 01:22:57.323230 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0229 01:22:57.333501 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 01:22:57.343860 325441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0229 01:22:57.354662 325441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
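The run of sed commands above rewrites /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs driver, sandbox_image pins the pause container to registry.k8s.io/pause:3.2, the runtime names are normalized to io.containerd.runc.v2, and conf_dir points containerd's CNI at /etc/cni/net.d. A Go sketch of the same idea for three of those rules; patchConfig is illustrative, and the real code does shell out to sed as shown:

package containerd

import (
	"os"
	"regexp"
)

// patchConfig applies the same line-oriented rewrites as the sed pipeline:
// force cgroupfs, pin the pause image, and set the CNI conf_dir.
func patchConfig(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.2"`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		b = regexp.MustCompile(r.re).ReplaceAll(b, []byte(r.repl))
	}
	return os.WriteFile(path, b, 0644)
}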
I0229 01:22:57.365194 325441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0229 01:22:57.374535 325441 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0229 01:22:57.374597 325441 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0229 01:22:57.387947 325441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0229 01:22:57.397423 325441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 01:22:57.510128 325441 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0229 01:22:57.538270 325441 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
I0229 01:22:57.538363 325441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0229 01:22:57.543171 325441 retry.go:31] will retry after 703.505308ms: stat /run/containerd/containerd.sock: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
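containerd was just restarted, so its socket may not exist for a moment; the retry.go line above polls until it appears. The pattern reduced to a sketch; waitFor is illustrative, and the real helper adds jittered backoff rather than a fixed delay:

package retry

import (
	"errors"
	"time"
)

// waitFor polls check with a fixed delay until it succeeds or the timeout
// elapses, e.g. waiting for /run/containerd/containerd.sock to show up.
func waitFor(timeout, delay time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return errors.New("timed out waiting: " + err.Error())
		}
		time.Sleep(delay)
	}
}

Called as, say, waitFor(60*time.Second, 700*time.Millisecond, func() error { _, err := os.Stat("/run/containerd/containerd.sock"); return err }), which matches the 60s wait and ~700ms retry visible in the log.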
I0229 01:22:58.247068 325441 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0229 01:22:58.252856 325441 start.go:543] Will wait 60s for crictl version
I0229 01:22:58.252908 325441 ssh_runner.go:195] Run: which crictl
I0229 01:22:58.257073 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0229 01:22:58.293694 325441 start.go:559] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.11
RuntimeApiVersion: v1
I0229 01:22:58.293769 325441 ssh_runner.go:195] Run: containerd --version
I0229 01:22:58.323549 325441 ssh_runner.go:195] Run: containerd --version
I0229 01:22:58.353761 325441 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.7.11 ...
I0229 01:22:58.355098 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) Calling .GetIP
I0229 01:22:58.357661 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:58.358013 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:c8:ec", ip: ""} in network mk-ingress-addon-legacy-671566: {Iface:virbr1 ExpiryTime:2024-02-29 02:22:49 +0000 UTC Type:0 Mac:52:54:00:3b:c8:ec Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ingress-addon-legacy-671566 Clientid:01:52:54:00:3b:c8:ec}
I0229 01:22:58.358032 325441 main.go:141] libmachine: (ingress-addon-legacy-671566) DBG | domain ingress-addon-legacy-671566 has defined IP address 192.168.39.248 and MAC address 52:54:00:3b:c8:ec in network mk-ingress-addon-legacy-671566
I0229 01:22:58.358259 325441 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0229 01:22:58.362744 325441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
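The bash one-liner above is the standard trick for idempotently pinning a name in /etc/hosts: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is staged under /tmp and copied back with sudo (a plain `>` redirect would not run as root). The same edit in Go looks roughly like this; ensureHostsEntry is illustrative:

package hosts

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name and appends "ip\tname",
// mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			continue // stale entry for this name
		}
		keep = append(keep, line)
	}
	keep = append(keep, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
}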
I0229 01:22:58.375869 325441 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
I0229 01:22:58.375920 325441 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:22:58.407351 325441 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
I0229 01:22:58.407414 325441 ssh_runner.go:195] Run: which lz4
I0229 01:22:58.411355 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0229 01:22:58.411417 325441 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0229 01:22:58.415875 325441 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0229 01:22:58.415910 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (494845061 bytes)
I0229 01:23:00.182242 325441 containerd.go:548] Took 1.770827 seconds to copy over tarball
I0229 01:23:00.182315 325441 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0229 01:23:03.140719 325441 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.958371689s)
I0229 01:23:03.140751 325441 containerd.go:555] Took 2.958481 seconds to extract the tarball
I0229 01:23:03.140761 325441 ssh_runner.go:146] rm: /preloaded.tar.lz4
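The preload path above is minikube's big startup optimization: rather than pulling each image over the network, a ~495 MB lz4-compressed tarball of the image store is scp'd to the guest and unpacked straight into /var, then deleted. Sketched as plain commands, which the real code runs on the guest over SSH; extractPreload is illustrative:

package preload

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var and removes
// it, mirroring the tar/rm pair in the log. --xattrs-include preserves the
// security.capability attributes that some binaries depend on.
func extractPreload() error {
	cmds := [][]string{
		{"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", c, err, out)
		}
	}
	return nil
}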
I0229 01:23:03.189078 325441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 01:23:03.307525 325441 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0229 01:23:03.339197 325441 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:23:03.386958 325441 retry.go:31] will retry after 361.248235ms: sudo crictl images --output json: Process exited with status 1
stdout:
stderr:
time="2024-02-29T01:23:03Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
I0229 01:23:03.748510 325441 ssh_runner.go:195] Run: sudo crictl images --output json
I0229 01:23:03.792799 325441 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
I0229 01:23:03.792825 325441 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0229 01:23:03.792928 325441 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0229 01:23:03.792963 325441 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0229 01:23:03.792971 325441 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 01:23:03.792961 325441 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0229 01:23:03.792891 325441 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 01:23:03.792943 325441 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0229 01:23:03.792919 325441 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 01:23:03.792998 325441 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 01:23:03.794209 325441 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0229 01:23:03.794219 325441 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 01:23:03.794213 325441 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0229 01:23:03.794305 325441 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0229 01:23:03.794337 325441 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0229 01:23:03.794404 325441 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 01:23:03.794431 325441 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 01:23:03.794425 325441 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 01:23:03.952517 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.18.20" and sha "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346"
I0229 01:23:03.952583 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:03.967428 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
I0229 01:23:03.967481 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.048733 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.3-0" and sha "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f"
I0229 01:23:04.048821 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.096864 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.18.20" and sha "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290"
I0229 01:23:04.096966 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.104045 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.18.20" and sha "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1"
I0229 01:23:04.104108 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.108091 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.6.7" and sha "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5"
I0229 01:23:04.108156 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.132898 325441 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.18.20" and sha "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba"
I0229 01:23:04.132980 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.308449 325441 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0229 01:23:04.308502 325441 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 01:23:04.308552 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:04.501995 325441 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0229 01:23:04.502056 325441 cri.go:218] Removing image: registry.k8s.io/pause:3.2
I0229 01:23:04.502136 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:04.645601 325441 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
I0229 01:23:04.645683 325441 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
I0229 01:23:04.869435 325441 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0229 01:23:04.869493 325441 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
I0229 01:23:04.869552 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:05.132347 325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.035345165s)
I0229 01:23:05.132436 325441 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0229 01:23:05.132479 325441 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 01:23:05.132542 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:05.164353 325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.060212589s)
I0229 01:23:05.164439 325441 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0229 01:23:05.164489 325441 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 01:23:05.164544 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:05.164937 325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.056752568s)
I0229 01:23:05.165010 325441 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0229 01:23:05.165048 325441 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
I0229 01:23:05.165106 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:05.269069 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
I0229 01:23:05.269134 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
I0229 01:23:05.271191 325441 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.13817999s)
I0229 01:23:05.271276 325441 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0229 01:23:05.271315 325441 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0229 01:23:05.271349 325441 ssh_runner.go:195] Run: which crictl
I0229 01:23:05.339705 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
I0229 01:23:05.339757 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0229 01:23:05.339802 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
I0229 01:23:05.339858 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
I0229 01:23:05.447015 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0229 01:23:05.447098 325441 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
I0229 01:23:05.447600 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0229 01:23:05.473799 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0229 01:23:05.473893 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0229 01:23:05.473924 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0229 01:23:05.474009 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0229 01:23:05.500925 325441 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0229 01:23:05.500976 325441 cache_images.go:92] LoadImages completed in 1.70813875s
W0229 01:23:05.501048 325441 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18063-309085/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
I0229 01:23:05.501096 325441 ssh_runner.go:195] Run: sudo crictl info
I0229 01:23:05.536285 325441 cni.go:84] Creating CNI manager for ""
I0229 01:23:05.536308 325441 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
I0229 01:23:05.536333 325441 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0229 01:23:05.536357 325441 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.248 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-671566 NodeName:ingress-addon-legacy-671566 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0229 01:23:05.536538 325441 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.248
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "ingress-addon-legacy-671566"
kubeletExtraArgs:
node-ip: 192.168.39.248
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.248"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0229 01:23:05.536633 325441 kubeadm.go:976] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-671566 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.248
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0229 01:23:05.536685 325441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0229 01:23:05.547625 325441 binaries.go:44] Found k8s binaries, skipping transfer
I0229 01:23:05.547684 325441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0229 01:23:05.557771 325441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0229 01:23:05.575886 325441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0229 01:23:05.594000 325441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2137 bytes)
I0229 01:23:05.612719 325441 ssh_runner.go:195] Run: grep 192.168.39.248 control-plane.minikube.internal$ /etc/hosts
I0229 01:23:05.617082 325441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.248 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0229 01:23:05.630507 325441 certs.go:56] Setting up /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566 for IP: 192.168.39.248
I0229 01:23:05.630574 325441 certs.go:190] acquiring lock for shared ca certs: {Name:mkd93205d1e0ff28501dacf7d21e224f19de9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:05.630747 325441 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key
I0229 01:23:05.630812 325441 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key
I0229 01:23:05.630870 325441 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.key
I0229 01:23:05.630887 325441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.crt with IP's: []
I0229 01:23:05.710958 325441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.crt ...
I0229 01:23:05.710992 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.crt: {Name:mkfa226b1bdfa793718014ec2b328d9ffdcc4cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:05.711174 325441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.key ...
I0229 01:23:05.711204 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/client.key: {Name:mkea2f79a37bc3b329676ae862b60640c1b92162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:05.711305 325441 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70
I0229 01:23:05.711330 325441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70 with IP's: [192.168.39.248 10.96.0.1 127.0.0.1 10.0.0.1]
I0229 01:23:05.797945 325441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70 ...
I0229 01:23:05.797977 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70: {Name:mkbf8ee2698e7d138cb6c86bf8794cd65ae8565a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:05.798161 325441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70 ...
I0229 01:23:05.798179 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70: {Name:mk75ced3308e182f8a25a9ef5be4a614f1a09603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:05.798269 325441 certs.go:337] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt.25b71a70 -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt
I0229 01:23:05.798380 325441 certs.go:341] copying /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key.25b71a70 -> /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key
I0229 01:23:05.798456 325441 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key
I0229 01:23:05.798476 325441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt with IP's: []
I0229 01:23:06.086294 325441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt ...
I0229 01:23:06.086327 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt: {Name:mk42c3f9895eb9d10a1b3cdbfc85c614b4e5f116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:06.086503 325441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key ...
I0229 01:23:06.086521 325441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key: {Name:mk07322cbdd7c3244d9d7b10ccbd63e80f2c1f19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 01:23:06.086612 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0229 01:23:06.086648 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0229 01:23:06.086676 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0229 01:23:06.086699 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0229 01:23:06.086716 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0229 01:23:06.086730 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0229 01:23:06.086744 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0229 01:23:06.086761 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0229 01:23:06.086833 325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem (1338 bytes)
W0229 01:23:06.086891 325441 certs.go:433] ignoring /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336_empty.pem, impossibly tiny 0 bytes
I0229 01:23:06.086910 325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca-key.pem (1679 bytes)
I0229 01:23:06.086950 325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/ca.pem (1082 bytes)
I0229 01:23:06.086985 325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/cert.pem (1123 bytes)
I0229 01:23:06.087013 325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/home/jenkins/minikube-integration/18063-309085/.minikube/certs/key.pem (1675 bytes)
I0229 01:23:06.087072 325441 certs.go:437] found cert: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem (1708 bytes)
I0229 01:23:06.087116 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem -> /usr/share/ca-certificates/3163362.pem
I0229 01:23:06.087137 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0229 01:23:06.087154 325441 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem -> /usr/share/ca-certificates/316336.pem
I0229 01:23:06.087779 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0229 01:23:06.116460 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0229 01:23:06.142592 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0229 01:23:06.168422 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/profiles/ingress-addon-legacy-671566/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0229 01:23:06.193655 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0229 01:23:06.219339 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0229 01:23:06.244982 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0229 01:23:06.270173 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0229 01:23:06.295738 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/files/etc/ssl/certs/3163362.pem --> /usr/share/ca-certificates/3163362.pem (1708 bytes)
I0229 01:23:06.321222 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0229 01:23:06.346611 325441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18063-309085/.minikube/certs/316336.pem --> /usr/share/ca-certificates/316336.pem (1338 bytes)
I0229 01:23:06.371502 325441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0229 01:23:06.388956 325441 ssh_runner.go:195] Run: openssl version
I0229 01:23:06.394961 325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3163362.pem && ln -fs /usr/share/ca-certificates/3163362.pem /etc/ssl/certs/3163362.pem"
I0229 01:23:06.405800 325441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3163362.pem
I0229 01:23:06.410970 325441 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 01:18 /usr/share/ca-certificates/3163362.pem
I0229 01:23:06.411018 325441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3163362.pem
I0229 01:23:06.417130 325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3163362.pem /etc/ssl/certs/3ec20f2e.0"
I0229 01:23:06.428978 325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0229 01:23:06.441159 325441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0229 01:23:06.445867 325441 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 01:12 /usr/share/ca-certificates/minikubeCA.pem
I0229 01:23:06.445932 325441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0229 01:23:06.451762 325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0229 01:23:06.462833 325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/316336.pem && ln -fs /usr/share/ca-certificates/316336.pem /etc/ssl/certs/316336.pem"
I0229 01:23:06.473740 325441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/316336.pem
I0229 01:23:06.478469 325441 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 01:18 /usr/share/ca-certificates/316336.pem
I0229 01:23:06.478525 325441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/316336.pem
I0229 01:23:06.484277 325441 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/316336.pem /etc/ssl/certs/51391683.0"
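The openssl/ln pairs above exist because OpenSSL locates trusted CAs by subject-hash filename: each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink, which is exactly what the c_rehash tool automates. A sketch of creating one such link; linkByHash is illustrative:

package certs

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the cert's OpenSSL subject hash and symlinks it into
// /etc/ssl/certs as <hash>.0, mirroring the openssl + ln -fs pairs above.
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pem, link)
}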
I0229 01:23:06.495008 325441 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0229 01:23:06.499296 325441 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0229 01:23:06.499341 325441 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-671566 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-671566 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.248 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 01:23:06.499416 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0229 01:23:06.499450 325441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0229 01:23:06.536391 325441 cri.go:89] found id: ""
I0229 01:23:06.536446 325441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0229 01:23:06.546277 325441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0229 01:23:06.555993 325441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 01:23:06.565608 325441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0229 01:23:06.565649 325441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 01:23:06.620686 325441 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 01:23:06.621022 325441 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 01:23:06.765007 325441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 01:23:06.765164 325441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 01:23:06.765363 325441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 01:23:06.966398 325441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 01:23:06.966966 325441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 01:23:06.967036 325441 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 01:23:07.091209 325441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 01:23:07.092973 325441 out.go:204] - Generating certificates and keys ...
I0229 01:23:07.093072 325441 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 01:23:07.093149 325441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 01:23:07.617656 325441 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0229 01:23:07.772191 325441 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0229 01:23:07.925456 325441 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0229 01:23:08.304209 325441 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0229 01:23:08.516856 325441 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0229 01:23:08.517042 325441 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
I0229 01:23:08.731858 325441 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0229 01:23:08.731984 325441 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
I0229 01:23:08.995142 325441 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0229 01:23:09.466158 325441 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0229 01:23:09.804992 325441 kubeadm.go:322] [certs] Generating "sa" key and public key
I0229 01:23:09.805062 325441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 01:23:10.060213 325441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 01:23:10.250231 325441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 01:23:10.488582 325441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 01:23:10.697438 325441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 01:23:10.698934 325441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 01:23:10.701519 325441 out.go:204] - Booting up control plane ...
I0229 01:23:10.701633 325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 01:23:10.715870 325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 01:23:10.716024 325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 01:23:10.716154 325441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 01:23:10.720126 325441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 01:23:50.713178 325441 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 01:23:50.714172 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:23:50.714416 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:23:55.715121 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:23:55.715394 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:24:05.715374 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:24:05.715640 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:24:25.715438 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:24:25.715623 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:25:05.716749 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:25:05.716987 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:25:05.716997 325441 kubeadm.go:322]
I0229 01:25:05.717040 325441 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 01:25:05.717100 325441 kubeadm.go:322] timed out waiting for the condition
I0229 01:25:05.717114 325441 kubeadm.go:322]
I0229 01:25:05.717161 325441 kubeadm.go:322] This error is likely caused by:
I0229 01:25:05.717235 325441 kubeadm.go:322] - The kubelet is not running
I0229 01:25:05.717377 325441 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 01:25:05.717387 325441 kubeadm.go:322]
I0229 01:25:05.717511 325441 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 01:25:05.717569 325441 kubeadm.go:322] - 'systemctl status kubelet'
I0229 01:25:05.717616 325441 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 01:25:05.717635 325441 kubeadm.go:322]
I0229 01:25:05.717924 325441 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 01:25:05.718091 325441 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 01:25:05.718114 325441 kubeadm.go:322]
I0229 01:25:05.718246 325441 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0229 01:25:05.718386 325441 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0229 01:25:05.718497 325441 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 01:25:05.718618 325441 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0229 01:25:05.718655 325441 kubeadm.go:322]
I0229 01:25:05.718892 325441 kubeadm.go:322] W0229 01:23:06.601612 835 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 01:25:05.719046 325441 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 01:25:05.719175 325441 kubeadm.go:322] W0229 01:23:10.697008 835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 01:25:05.719289 325441 kubeadm.go:322] W0229 01:23:10.697935 835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 01:25:05.719380 325441 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 01:25:05.719480 325441 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
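[editor's note] The wait-control-plane phase failing above only polls the kubelet's local healthz endpoint on port 10248, and every probe ends in "connection refused", meaning the kubelet process is not staying up long enough to open the port. Reproducing the probe and the triage steps the error message itself suggests, by hand on the node:

  # The exact health probe kubeadm reports as failing
  curl -sSL http://localhost:10248/healthz
  # Triage suggested by the kubeadm error text above
  systemctl status kubelet
  journalctl -xeu kubelet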
W0229 01:25:05.719682 325441 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-671566 localhost] and IPs [192.168.39.248 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 01:23:06.601612 835 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 01:23:10.697008 835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 01:23:10.697935 835 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0229 01:25:05.719737 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I0229 01:25:06.173667 325441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
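[editor's note] systemctl is-active --quiet prints nothing and reports only through its exit status (0 when the unit is active), so minikube uses it here as a cheap boolean probe between the kubeadm reset and the retry. The equivalent interactive check:

  # Exit status is the answer: 0 = active, non-zero = inactive or failed
  if sudo systemctl is-active --quiet kubelet; then echo "kubelet active"; else echo "kubelet not active"; fi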
I0229 01:25:06.190163 325441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 01:25:06.200567 325441 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0229 01:25:06.200613 325441 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 01:25:06.257418 325441 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 01:25:06.257656 325441 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 01:25:06.398965 325441 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 01:25:06.399103 325441 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 01:25:06.399211 325441 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 01:25:06.609973 325441 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 01:25:06.610965 325441 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 01:25:06.611019 325441 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 01:25:06.746701 325441 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 01:25:06.748797 325441 out.go:204] - Generating certificates and keys ...
I0229 01:25:06.748893 325441 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 01:25:06.749012 325441 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 01:25:06.749147 325441 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0229 01:25:06.749261 325441 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0229 01:25:06.749357 325441 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0229 01:25:06.749441 325441 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0229 01:25:06.749542 325441 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0229 01:25:06.749629 325441 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0229 01:25:06.749755 325441 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0229 01:25:06.749871 325441 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0229 01:25:06.749926 325441 kubeadm.go:322] [certs] Using the existing "sa" key
I0229 01:25:06.750025 325441 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 01:25:06.930317 325441 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 01:25:07.025823 325441 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 01:25:07.129158 325441 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 01:25:07.264686 325441 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 01:25:07.265268 325441 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 01:25:07.266936 325441 out.go:204] - Booting up control plane ...
I0229 01:25:07.267080 325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 01:25:07.278268 325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 01:25:07.281411 325441 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 01:25:07.282723 325441 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 01:25:07.285793 325441 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 01:25:47.289469 325441 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 01:25:47.290069 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:25:47.290334 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:25:52.291353 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:25:52.291559 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:26:02.292655 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:26:02.292894 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:26:22.291778 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:26:22.292015 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:27:02.291095 325441 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 01:27:02.291349 325441 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 01:27:02.291358 325441 kubeadm.go:322]
I0229 01:27:02.291432 325441 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 01:27:02.291531 325441 kubeadm.go:322] timed out waiting for the condition
I0229 01:27:02.291548 325441 kubeadm.go:322]
I0229 01:27:02.291578 325441 kubeadm.go:322] This error is likely caused by:
I0229 01:27:02.291631 325441 kubeadm.go:322] - The kubelet is not running
I0229 01:27:02.291771 325441 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 01:27:02.291780 325441 kubeadm.go:322]
I0229 01:27:02.291906 325441 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 01:27:02.291953 325441 kubeadm.go:322] - 'systemctl status kubelet'
I0229 01:27:02.291984 325441 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 01:27:02.291992 325441 kubeadm.go:322]
I0229 01:27:02.292127 325441 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 01:27:02.292234 325441 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 01:27:02.292251 325441 kubeadm.go:322]
I0229 01:27:02.292372 325441 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
I0229 01:27:02.292508 325441 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
I0229 01:27:02.292610 325441 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 01:27:02.292721 325441 kubeadm.go:322] - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
I0229 01:27:02.292732 325441 kubeadm.go:322]
I0229 01:27:02.293199 325441 kubeadm.go:322] W0229 01:25:06.251317 3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 01:27:02.293339 325441 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 01:27:02.293456 325441 kubeadm.go:322] W0229 01:25:07.272412 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 01:27:02.293623 325441 kubeadm.go:322] W0229 01:25:07.275643 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 01:27:02.293732 325441 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 01:27:02.293828 325441 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0229 01:27:02.293927 325441 kubeadm.go:406] StartCluster complete in 3m55.794585648s
I0229 01:27:02.294035 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0229 01:27:02.294124 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0229 01:27:02.342085 325441 cri.go:89] found id: ""
I0229 01:27:02.342112 325441 logs.go:276] 0 containers: []
W0229 01:27:02.342123 325441 logs.go:278] No container was found matching "kube-apiserver"
I0229 01:27:02.342133 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0229 01:27:02.342200 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0229 01:27:02.377552 325441 cri.go:89] found id: ""
I0229 01:27:02.377581 325441 logs.go:276] 0 containers: []
W0229 01:27:02.377592 325441 logs.go:278] No container was found matching "etcd"
I0229 01:27:02.377600 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0229 01:27:02.377671 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0229 01:27:02.411797 325441 cri.go:89] found id: ""
I0229 01:27:02.411818 325441 logs.go:276] 0 containers: []
W0229 01:27:02.411825 325441 logs.go:278] No container was found matching "coredns"
I0229 01:27:02.411831 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0229 01:27:02.411877 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0229 01:27:02.442887 325441 cri.go:89] found id: ""
I0229 01:27:02.442912 325441 logs.go:276] 0 containers: []
W0229 01:27:02.442922 325441 logs.go:278] No container was found matching "kube-scheduler"
I0229 01:27:02.442928 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0229 01:27:02.442998 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0229 01:27:02.503577 325441 cri.go:89] found id: ""
I0229 01:27:02.503604 325441 logs.go:276] 0 containers: []
W0229 01:27:02.503613 325441 logs.go:278] No container was found matching "kube-proxy"
I0229 01:27:02.503619 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0229 01:27:02.503689 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0229 01:27:02.557845 325441 cri.go:89] found id: ""
I0229 01:27:02.557881 325441 logs.go:276] 0 containers: []
W0229 01:27:02.557891 325441 logs.go:278] No container was found matching "kube-controller-manager"
I0229 01:27:02.557899 325441 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0229 01:27:02.557956 325441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0229 01:27:02.596567 325441 cri.go:89] found id: ""
I0229 01:27:02.596596 325441 logs.go:276] 0 containers: []
W0229 01:27:02.596606 325441 logs.go:278] No container was found matching "kindnet"
I0229 01:27:02.596620 325441 logs.go:123] Gathering logs for containerd ...
I0229 01:27:02.596672 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0229 01:27:02.629878 325441 logs.go:123] Gathering logs for container status ...
I0229 01:27:02.629916 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0229 01:27:02.672685 325441 logs.go:123] Gathering logs for kubelet ...
I0229 01:27:02.672719 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0229 01:27:02.697224 325441 logs.go:138] Found kubelet problem: Feb 29 01:26:54 ingress-addon-legacy-671566 kubelet[6134]: F0229 01:26:54.305038 6134 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:27:02.702331 325441 logs.go:138] Found kubelet problem: Feb 29 01:26:55 ingress-addon-legacy-671566 kubelet[6161]: F0229 01:26:55.554030 6161 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:27:02.707679 325441 logs.go:138] Found kubelet problem: Feb 29 01:26:56 ingress-addon-legacy-671566 kubelet[6187]: F0229 01:26:56.762409 6187 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:27:02.713016 325441 logs.go:138] Found kubelet problem: Feb 29 01:26:58 ingress-addon-legacy-671566 kubelet[6213]: F0229 01:26:58.011021 6213 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:27:02.717824 325441 logs.go:138] Found kubelet problem: Feb 29 01:26:59 ingress-addon-legacy-671566 kubelet[6246]: F0229 01:26:59.297682 6246 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:27:02.722645 325441 logs.go:138] Found kubelet problem: Feb 29 01:27:00 ingress-addon-legacy-671566 kubelet[6275]: F0229 01:27:00.535137 6275 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 01:27:02.727451 325441 logs.go:138] Found kubelet problem: Feb 29 01:27:01 ingress-addon-legacy-671566 kubelet[6303]: F0229 01:27:01.826126 6303 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
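[editor's note] These repeated fatals are the actual root cause of both failed init attempts: on every restart the kubelet aborts while starting its ContainerManager because it cannot obtain filesystem information for the root partition ("failed to get rootfs info: unable to find data in memory cache"), so the process dies before the port-10248 health endpoint ever comes up. Pulling just those fatals out of the journal, filtering the same source minikube scans above:

  # Same journal minikube reads; keep only the fatal ContainerManager lines
  sudo journalctl -u kubelet -n 400 | grep -F 'Failed to start ContainerManager'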
I0229 01:27:02.730085 325441 logs.go:123] Gathering logs for dmesg ...
I0229 01:27:02.730102 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0229 01:27:02.745045 325441 logs.go:123] Gathering logs for describe nodes ...
I0229 01:27:02.745068 325441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0229 01:27:02.809898 325441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
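[editor's note] kubectl fails here for the same underlying reason as everything above: no kube-apiserver container is running, so nothing listens on the advertised port 8443 and the client gets "connection refused". A quick, hedged way to distinguish "apiserver down" from "apiserver unhealthy", assuming the port from the cluster config dumped earlier (-k because the apiserver serves a cluster-internal CA):

  # Connection refused matches the kubectl failure; any HTTP response at all
  # would instead point at a health/authz problem rather than a dead apiserver
  curl -sk https://localhost:8443/healthz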
W0229 01:27:02.809966 325441 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 01:25:06.251317 3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 01:25:07.272412 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 01:25:07.275643 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 01:27:02.810013 325441 out.go:239] *
W0229 01:27:02.810130 325441 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 01:25:06.251317 3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 01:25:07.272412 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 01:25:07.275643 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 01:27:02.810164 325441 out.go:239] *
W0229 01:27:02.811097 325441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0229 01:27:02.813112 325441 out.go:177] X Problems detected in kubelet:
I0229 01:27:02.814535 325441 out.go:177] Feb 29 01:26:54 ingress-addon-legacy-671566 kubelet[6134]: F0229 01:26:54.305038 6134 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:27:02.815713 325441 out.go:177] Feb 29 01:26:55 ingress-addon-legacy-671566 kubelet[6161]: F0229 01:26:55.554030 6161 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:27:02.816773 325441 out.go:177] Feb 29 01:26:56 ingress-addon-legacy-671566 kubelet[6187]: F0229 01:26:56.762409 6187 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 01:27:02.819477 325441 out.go:177]
W0229 01:27:02.820619 325441 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
W0229 01:25:06.251317 3633 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 01:25:07.272412 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 01:25:07.275643 3633 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 01:27:02.820675 325441 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0229 01:27:02.820694 325441 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0229 01:27:02.822003 325441 out.go:177]
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (291.04s)
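For reference, the suggestion in the log above amounts to re-running the same start command with one extra flag. This is a sketch reusing the invocation recorded in this test; the added --extra-config value is the one minikube itself suggests, not a verified fix.

    out/minikube-linux-amd64 start -p ingress-addon-legacy-671566 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=kvm2 --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd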