=== RUN TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run: out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2
E0229 17:47:53.224104 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:50:09.379318 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:50:37.066082 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/addons-039717/client.crt: no such file or directory
E0229 17:51:00.470055 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.475397 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.485734 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.506026 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.546321 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.626705 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:00.787147 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:01.107713 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:01.748097 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:03.028627 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:05.589423 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:10.710450 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:20.950813 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:51:41.431124 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:52:22.392299 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
E0229 17:53:44.315783 13605 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/functional-339868/client.crt: no such file or directory
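The E-level lines above come from a different process (pid 13605) than this test run (pid 22364): a client-go cert_rotation watcher that, by the look of it, is still polling client certificates for profiles (addons-039717, functional-339868) torn down by earlier tests. A quick check, assuming the integration home path shown in the log:

    $ ls /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/
    # if addons-039717 and functional-339868 are absent, the watcher is stale and
    # these "no such file or directory" errors are noise relative to this failure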
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : exit status 109 (6m45.217243631s)
-- stdout --
* [ingress-addon-legacy-924574] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=18259
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node ingress-addon-legacy-924574 in cluster ingress-addon-legacy-924574
* Downloading Kubernetes v1.18.20 preload ...
* Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Generating certificates and keys ...
- Booting up control plane ...
X Problems detected in kubelet:
Feb 29 17:54:24 ingress-addon-legacy-924574 kubelet[51379]: F0229 17:54:24.785892 51379 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 17:54:26 ingress-addon-legacy-924574 kubelet[51554]: F0229 17:54:26.028541 51554 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Feb 29 17:54:27 ingress-addon-legacy-924574 kubelet[51732]: F0229 17:54:27.250912 51732 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
-- /stdout --
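All three kubelet restarts above die at the same point: ContainerManager startup fails with "failed to get rootfs info: unable to find data in memory cache", which suggests the kubelet's embedded cadvisor had not cached filesystem info for the root mount when the legacy v1.18.20 kubelet asked for it. A minimal inspection sketch, assuming the VM is still up and the default minikube ssh access works:

    $ out/minikube-linux-amd64 ssh -p ingress-addon-legacy-924574
    $ sudo systemctl status kubelet
    # look at what precedes the ContainerManager failure in each crash loop
    $ sudo journalctl -u kubelet --no-pager | tail -n 100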
** stderr **
I0229 17:47:48.400479 22364 out.go:291] Setting OutFile to fd 1 ...
I0229 17:47:48.400569 22364 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:47:48.400574 22364 out.go:304] Setting ErrFile to fd 2...
I0229 17:47:48.400582 22364 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 17:47:48.400772 22364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18259-6402/.minikube/bin
I0229 17:47:48.401330 22364 out.go:298] Setting JSON to false
I0229 17:47:48.402202 22364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1819,"bootTime":1709227050,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1052-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0229 17:47:48.402273 22364 start.go:139] virtualization: kvm guest
I0229 17:47:48.404649 22364 out.go:177] * [ingress-addon-legacy-924574] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
I0229 17:47:48.406356 22364 out.go:177] - MINIKUBE_LOCATION=18259
I0229 17:47:48.406319 22364 notify.go:220] Checking for updates...
I0229 17:47:48.407700 22364 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0229 17:47:48.409197 22364 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/18259-6402/kubeconfig
I0229 17:47:48.410575 22364 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/18259-6402/.minikube
I0229 17:47:48.411886 22364 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0229 17:47:48.413346 22364 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0229 17:47:48.414987 22364 driver.go:392] Setting default libvirt URI to qemu:///system
I0229 17:47:48.448716 22364 out.go:177] * Using the kvm2 driver based on user configuration
I0229 17:47:48.450042 22364 start.go:299] selected driver: kvm2
I0229 17:47:48.450052 22364 start.go:903] validating driver "kvm2" against <nil>
I0229 17:47:48.450062 22364 start.go:914] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0229 17:47:48.450823 22364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 17:47:48.450918 22364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18259-6402/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0229 17:47:48.465761 22364 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
I0229 17:47:48.465811 22364 start_flags.go:309] no existing cluster config was found, will generate one from the flags
I0229 17:47:48.466033 22364 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0229 17:47:48.466098 22364 cni.go:84] Creating CNI manager for ""
I0229 17:47:48.466117 22364 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0229 17:47:48.466126 22364 start_flags.go:323] config:
{Name:ingress-addon-legacy-924574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 17:47:48.466253 22364 iso.go:125] acquiring lock: {Name:mk9e2949140cf4fb33ba681841e4205e10738498 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0229 17:47:48.468923 22364 out.go:177] * Starting control plane node ingress-addon-legacy-924574 in cluster ingress-addon-legacy-924574
I0229 17:47:48.470152 22364 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0229 17:47:48.494230 22364 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0229 17:47:48.494259 22364 cache.go:56] Caching tarball of preloaded images
I0229 17:47:48.494407 22364 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0229 17:47:48.496128 22364 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
I0229 17:47:48.497477 22364 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0229 17:47:48.522027 22364 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
I0229 17:47:52.100633 22364 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0229 17:47:52.100743 22364 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
I0229 17:47:52.880821 22364 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
I0229 17:47:52.881140 22364 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/config.json ...
I0229 17:47:52.881167 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/config.json: {Name:mkf578002dea33b0c8dc25c2275a8c4958179e8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:47:52.881347 22364 start.go:365] acquiring machines lock for ingress-addon-legacy-924574: {Name:mk74557154dfda7cafd0db37b211474724c8cf09 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0229 17:47:52.881389 22364 start.go:369] acquired machines lock for "ingress-addon-legacy-924574" in 20.69µs
I0229 17:47:52.881411 22364 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-924574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
I0229 17:47:52.881503 22364 start.go:125] createHost starting for "" (driver="kvm2")
I0229 17:47:52.883763 22364 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I0229 17:47:52.883906 22364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0229 17:47:52.883949 22364 main.go:141] libmachine: Launching plugin server for driver kvm2
I0229 17:47:52.898092 22364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
I0229 17:47:52.898549 22364 main.go:141] libmachine: () Calling .GetVersion
I0229 17:47:52.899063 22364 main.go:141] libmachine: Using API Version 1
I0229 17:47:52.899083 22364 main.go:141] libmachine: () Calling .SetConfigRaw
I0229 17:47:52.899437 22364 main.go:141] libmachine: () Calling .GetMachineName
I0229 17:47:52.899601 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
I0229 17:47:52.899753 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:47:52.899880 22364 start.go:159] libmachine.API.Create for "ingress-addon-legacy-924574" (driver="kvm2")
I0229 17:47:52.899910 22364 client.go:168] LocalClient.Create starting
I0229 17:47:52.899944 22364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem
I0229 17:47:52.899981 22364 main.go:141] libmachine: Decoding PEM data...
I0229 17:47:52.900002 22364 main.go:141] libmachine: Parsing certificate...
I0229 17:47:52.900071 22364 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem
I0229 17:47:52.900096 22364 main.go:141] libmachine: Decoding PEM data...
I0229 17:47:52.900115 22364 main.go:141] libmachine: Parsing certificate...
I0229 17:47:52.900140 22364 main.go:141] libmachine: Running pre-create checks...
I0229 17:47:52.900153 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .PreCreateCheck
I0229 17:47:52.900452 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetConfigRaw
I0229 17:47:52.900781 22364 main.go:141] libmachine: Creating machine...
I0229 17:47:52.900795 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .Create
I0229 17:47:52.900895 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Creating KVM machine...
I0229 17:47:52.902080 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found existing default KVM network
I0229 17:47:52.902744 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:52.902622 22398 network.go:207] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
I0229 17:47:52.907905 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | trying to create private KVM network mk-ingress-addon-legacy-924574 192.168.39.0/24...
I0229 17:47:52.972196 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | private KVM network mk-ingress-addon-legacy-924574 192.168.39.0/24 created
I0229 17:47:52.972224 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting up store path in /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574 ...
I0229 17:47:52.972242 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:52.972190 22398 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18259-6402/.minikube
I0229 17:47:52.972261 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Building disk image from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso
I0229 17:47:52.972332 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Downloading /home/jenkins/minikube-integration/18259-6402/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18259-6402/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
I0229 17:47:53.190763 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:53.190620 22398 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa...
I0229 17:47:53.367655 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:53.367530 22398 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/ingress-addon-legacy-924574.rawdisk...
I0229 17:47:53.367696 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Writing magic tar header
I0229 17:47:53.367710 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Writing SSH key tar header
I0229 17:47:53.367719 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:53.367669 22398 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574 ...
I0229 17:47:53.367796 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574
I0229 17:47:53.367856 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574 (perms=drwx------)
I0229 17:47:53.367882 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube/machines
I0229 17:47:53.367899 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube/machines (perms=drwxr-xr-x)
I0229 17:47:53.367913 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402/.minikube
I0229 17:47:53.367930 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18259-6402
I0229 17:47:53.367943 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0229 17:47:53.367957 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home/jenkins
I0229 17:47:53.367971 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402/.minikube (perms=drwxr-xr-x)
I0229 17:47:53.367988 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration/18259-6402 (perms=drwxrwxr-x)
I0229 17:47:53.368001 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0229 17:47:53.368010 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0229 17:47:53.368020 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Creating domain...
I0229 17:47:53.368033 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Checking permissions on dir: /home
I0229 17:47:53.368046 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Skipping /home - not owner
I0229 17:47:53.369094 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) define libvirt domain using xml:
I0229 17:47:53.369109 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <domain type='kvm'>
I0229 17:47:53.369116 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <name>ingress-addon-legacy-924574</name>
I0229 17:47:53.369121 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <memory unit='MiB'>4096</memory>
I0229 17:47:53.369132 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <vcpu>2</vcpu>
I0229 17:47:53.369136 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <features>
I0229 17:47:53.369141 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <acpi/>
I0229 17:47:53.369146 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <apic/>
I0229 17:47:53.369150 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <pae/>
I0229 17:47:53.369154 22364 main.go:141] libmachine: (ingress-addon-legacy-924574)
I0229 17:47:53.369159 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </features>
I0229 17:47:53.369164 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <cpu mode='host-passthrough'>
I0229 17:47:53.369169 22364 main.go:141] libmachine: (ingress-addon-legacy-924574)
I0229 17:47:53.369173 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </cpu>
I0229 17:47:53.369179 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <os>
I0229 17:47:53.369193 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <type>hvm</type>
I0229 17:47:53.369211 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <boot dev='cdrom'/>
I0229 17:47:53.369227 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <boot dev='hd'/>
I0229 17:47:53.369234 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <bootmenu enable='no'/>
I0229 17:47:53.369239 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </os>
I0229 17:47:53.369247 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <devices>
I0229 17:47:53.369253 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <disk type='file' device='cdrom'>
I0229 17:47:53.369267 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/boot2docker.iso'/>
I0229 17:47:53.369275 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <target dev='hdc' bus='scsi'/>
I0229 17:47:53.369290 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <readonly/>
I0229 17:47:53.369295 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </disk>
I0229 17:47:53.369308 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <disk type='file' device='disk'>
I0229 17:47:53.369324 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <driver name='qemu' type='raw' cache='default' io='threads' />
I0229 17:47:53.369340 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <source file='/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/ingress-addon-legacy-924574.rawdisk'/>
I0229 17:47:53.369347 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <target dev='hda' bus='virtio'/>
I0229 17:47:53.369353 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </disk>
I0229 17:47:53.369359 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <interface type='network'>
I0229 17:47:53.369365 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <source network='mk-ingress-addon-legacy-924574'/>
I0229 17:47:53.369370 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <model type='virtio'/>
I0229 17:47:53.369376 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </interface>
I0229 17:47:53.369381 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <interface type='network'>
I0229 17:47:53.369392 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <source network='default'/>
I0229 17:47:53.369401 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <model type='virtio'/>
I0229 17:47:53.369407 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </interface>
I0229 17:47:53.369412 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <serial type='pty'>
I0229 17:47:53.369425 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <target port='0'/>
I0229 17:47:53.369432 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </serial>
I0229 17:47:53.369438 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <console type='pty'>
I0229 17:47:53.369446 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <target type='serial' port='0'/>
I0229 17:47:53.369452 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </console>
I0229 17:47:53.369460 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <rng model='virtio'>
I0229 17:47:53.369467 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) <backend model='random'>/dev/random</backend>
I0229 17:47:53.369474 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </rng>
I0229 17:47:53.369480 22364 main.go:141] libmachine: (ingress-addon-legacy-924574)
I0229 17:47:53.369486 22364 main.go:141] libmachine: (ingress-addon-legacy-924574)
I0229 17:47:53.369491 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </devices>
I0229 17:47:53.369497 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) </domain>
I0229 17:47:53.369504 22364 main.go:141] libmachine: (ingress-addon-legacy-924574)
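The domain is defined from the XML printed line by line above (boot2docker ISO as cdrom, the raw disk, and two virtio NICs on mk-ingress-addon-legacy-924574 and default). A sketch for inspecting the result on the host, assuming virsh can reach the same qemu:///system URI the driver uses:

    $ sudo virsh -c qemu:///system dumpxml ingress-addon-legacy-924574
    $ sudo virsh -c qemu:///system net-info mk-ingress-addon-legacy-924574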
I0229 17:47:53.374000 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:67:27:48 in network default
I0229 17:47:53.374513 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Ensuring networks are active...
I0229 17:47:53.374526 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:53.375128 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Ensuring network default is active
I0229 17:47:53.375378 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Ensuring network mk-ingress-addon-legacy-924574 is active
I0229 17:47:53.375875 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Getting domain xml...
I0229 17:47:53.376549 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Creating domain...
I0229 17:47:54.558157 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Waiting to get IP...
I0229 17:47:54.558852 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:54.559215 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:54.559253 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:54.559189 22398 retry.go:31] will retry after 300.043204ms: waiting for machine to come up
I0229 17:47:54.860810 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:54.861165 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:54.861190 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:54.861124 22398 retry.go:31] will retry after 262.098032ms: waiting for machine to come up
I0229 17:47:55.124489 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:55.124864 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:55.124895 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:55.124851 22398 retry.go:31] will retry after 448.178434ms: waiting for machine to come up
I0229 17:47:55.574434 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:55.574830 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:55.574854 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:55.574788 22398 retry.go:31] will retry after 533.788809ms: waiting for machine to come up
I0229 17:47:56.110641 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:56.111052 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:56.111078 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:56.111001 22398 retry.go:31] will retry after 695.183136ms: waiting for machine to come up
I0229 17:47:56.808182 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:56.808548 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:56.808573 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:56.808506 22398 retry.go:31] will retry after 775.846643ms: waiting for machine to come up
I0229 17:47:57.585650 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:57.586067 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:57.586096 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:57.586006 22398 retry.go:31] will retry after 1.082583506s: waiting for machine to come up
I0229 17:47:58.669813 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:58.670199 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:58.670228 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:58.670143 22398 retry.go:31] will retry after 1.065634662s: waiting for machine to come up
I0229 17:47:59.737054 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:47:59.737554 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:47:59.737587 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:47:59.737483 22398 retry.go:31] will retry after 1.165608856s: waiting for machine to come up
I0229 17:48:00.904729 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:00.905063 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:48:00.905089 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:00.905029 22398 retry.go:31] will retry after 1.755378706s: waiting for machine to come up
I0229 17:48:02.662894 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:02.663270 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:48:02.663301 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:02.663214 22398 retry.go:31] will retry after 2.878131769s: waiting for machine to come up
I0229 17:48:05.544646 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:05.545053 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:48:05.545084 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:05.545002 22398 retry.go:31] will retry after 3.364383273s: waiting for machine to come up
I0229 17:48:08.910792 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:08.911302 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:48:08.911333 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:08.911251 22398 retry.go:31] will retry after 2.832000314s: waiting for machine to come up
I0229 17:48:11.746210 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:11.746594 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find current IP address of domain ingress-addon-legacy-924574 in network mk-ingress-addon-legacy-924574
I0229 17:48:11.746625 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | I0229 17:48:11.746539 22398 retry.go:31] will retry after 3.45619964s: waiting for machine to come up
I0229 17:48:15.205428 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:15.205939 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Found IP for machine: 192.168.39.8
I0229 17:48:15.205959 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has current primary IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:15.205965 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Reserving static IP address...
I0229 17:48:15.206304 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-924574", mac: "52:54:00:90:77:95", ip: "192.168.39.8"} in network mk-ingress-addon-legacy-924574
I0229 17:48:15.279036 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Getting to WaitForSSH function...
I0229 17:48:15.279070 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Reserved static IP address: 192.168.39.8
I0229 17:48:15.279083 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Waiting for SSH to be available...
I0229 17:48:15.281683 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:15.282007 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574
I0229 17:48:15.282105 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-924574 interface with MAC address 52:54:00:90:77:95
I0229 17:48:15.282292 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH client type: external
I0229 17:48:15.282314 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa (-rw-------)
I0229 17:48:15.282353 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa -p 22] /usr/bin/ssh <nil>}
I0229 17:48:15.282367 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | About to run SSH command:
I0229 17:48:15.282394 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | exit 0
I0229 17:48:15.286152 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | SSH cmd err, output: exit status 255:
I0229 17:48:15.286178 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0229 17:48:15.286285 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | command : exit 0
I0229 17:48:15.286306 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | err : exit status 255
I0229 17:48:15.286320 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | output :
I0229 17:48:18.287257 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Getting to WaitForSSH function...
I0229 17:48:18.289512 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.289949 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.289975 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.290082 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH client type: external
I0229 17:48:18.290113 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using SSH private key: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa (-rw-------)
I0229 17:48:18.290143 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa -p 22] /usr/bin/ssh <nil>}
I0229 17:48:18.290157 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | About to run SSH command:
I0229 17:48:18.290179 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | exit 0
I0229 17:48:18.415496 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | SSH cmd err, output: <nil>:
I0229 17:48:18.415778 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) KVM machine creation complete!
I0229 17:48:18.416106 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetConfigRaw
I0229 17:48:18.416613 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:18.416832 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:18.416990 22364 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0229 17:48:18.417003 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetState
I0229 17:48:18.418310 22364 main.go:141] libmachine: Detecting operating system of created instance...
I0229 17:48:18.418330 22364 main.go:141] libmachine: Waiting for SSH to be available...
I0229 17:48:18.418337 22364 main.go:141] libmachine: Getting to WaitForSSH function...
I0229 17:48:18.418347 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:18.420525 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.420864 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.420895 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.421007 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:18.421193 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.421362 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.421503 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:18.421690 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:18.421869 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:18.421879 22364 main.go:141] libmachine: About to run SSH command:
exit 0
I0229 17:48:18.519376 22364 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 17:48:18.519408 22364 main.go:141] libmachine: Detecting the provisioner...
I0229 17:48:18.519419 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:18.522184 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.522600 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.522624 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.522778 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:18.522974 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.523187 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.523356 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:18.523536 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:18.523738 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:18.523752 22364 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0229 17:48:18.620466 22364 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0229 17:48:18.620554 22364 main.go:141] libmachine: found compatible host: buildroot
I0229 17:48:18.620566 22364 main.go:141] libmachine: Provisioning with buildroot...
I0229 17:48:18.620580 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
I0229 17:48:18.620874 22364 buildroot.go:166] provisioning hostname "ingress-addon-legacy-924574"
I0229 17:48:18.620905 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
I0229 17:48:18.621128 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:18.623573 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.623947 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.623976 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.624075 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:18.624289 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.624444 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.624583 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:18.624706 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:18.624904 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:18.624918 22364 main.go:141] libmachine: About to run SSH command:
sudo hostname ingress-addon-legacy-924574 && echo "ingress-addon-legacy-924574" | sudo tee /etc/hostname
I0229 17:48:18.734459 22364 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-924574
I0229 17:48:18.734483 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:18.737191 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.737519 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.737551 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.737782 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:18.737981 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.738137 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.738269 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:18.738425 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:18.738589 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:18.738608 22364 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\singress-addon-legacy-924574' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-924574/g' /etc/hosts;
else
echo '127.0.1.1 ingress-addon-legacy-924574' | sudo tee -a /etc/hosts;
fi
fi
I0229 17:48:18.844725 22364 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0229 17:48:18.844755 22364 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18259-6402/.minikube CaCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18259-6402/.minikube}
I0229 17:48:18.844789 22364 buildroot.go:174] setting up certificates
I0229 17:48:18.844797 22364 provision.go:83] configureAuth start
I0229 17:48:18.844807 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetMachineName
I0229 17:48:18.845087 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
I0229 17:48:18.847576 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.847948 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.847984 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.848113 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:18.850264 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.850481 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.850505 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.850615 22364 provision.go:138] copyHostCerts
I0229 17:48:18.850655 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
I0229 17:48:18.850690 22364 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem, removing ...
I0229 17:48:18.850722 22364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem
I0229 17:48:18.850796 22364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/ca.pem (1078 bytes)
I0229 17:48:18.850866 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
I0229 17:48:18.850884 22364 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem, removing ...
I0229 17:48:18.850889 22364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem
I0229 17:48:18.850912 22364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/cert.pem (1123 bytes)
I0229 17:48:18.850952 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
I0229 17:48:18.850968 22364 exec_runner.go:144] found /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem, removing ...
I0229 17:48:18.850974 22364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem
I0229 17:48:18.850993 22364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18259-6402/.minikube/key.pem (1675 bytes)
I0229 17:48:18.851036 22364 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-924574 san=[192.168.39.8 192.168.39.8 localhost 127.0.0.1 minikube ingress-addon-legacy-924574]
I0229 17:48:18.906404 22364 provision.go:172] copyRemoteCerts
I0229 17:48:18.906458 22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0229 17:48:18.906480 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:18.908930 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.909217 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:18.909252 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:18.909398 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:18.909551 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:18.909721 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:18.909839 22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
I0229 17:48:18.990635 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0229 17:48:18.990719 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0229 17:48:19.015341 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem -> /etc/docker/server.pem
I0229 17:48:19.015401 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
I0229 17:48:19.038541 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0229 17:48:19.038614 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0229 17:48:19.061830 22364 provision.go:86] duration metric: configureAuth took 217.020562ms
I0229 17:48:19.061858 22364 buildroot.go:189] setting minikube options for container-runtime
I0229 17:48:19.062053 22364 config.go:182] Loaded profile config "ingress-addon-legacy-924574": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
I0229 17:48:19.062077 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:19.062353 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:19.064969 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:19.065252 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:19.065277 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:19.065419 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:19.065583 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:19.065755 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:19.065893 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:19.066034 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:19.066228 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:19.066241 22364 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0229 17:48:19.165237 22364 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0229 17:48:19.165256 22364 buildroot.go:70] root file system type: tmpfs
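The probe above runs "df --output=fstype / | tail -n 1" over SSH and reads back "tmpfs", which tells the provisioner the buildroot guest keeps its root filesystem in memory. A small sketch of the same probe run locally (assumes a GNU df on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	// First line is the "Type" header; the last line is the value.
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	return lines[len(lines)-1], nil
}

func main() {
	t, err := rootFSType()
	if err != nil {
		fmt.Println("df failed:", err)
		return
	}
	fmt.Println("root filesystem type:", t) // "tmpfs" on the buildroot guest
}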
I0229 17:48:19.165382 22364 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0229 17:48:19.165404 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:19.167907 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:19.168242 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:19.168270 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:19.168469 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:19.168669 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:19.168867 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:19.168982 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:19.169158 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:19.169355 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:19.169448 22364 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0229 17:48:19.282964 22364 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0229 17:48:19.283002 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:19.285498 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:19.285816 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:19.285855 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:19.285981 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:19.286140 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:19.286268 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:19.286372 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:19.286534 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:19.286741 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:19.286783 22364 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0229 17:48:20.063969 22364 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
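The command above is a write-if-changed idiom: diff exits nonzero when the unit differs or (as here, "can't stat") does not exist yet, which triggers the move, daemon-reload, enable, and restart in one shot; when nothing changed, the service is left alone. A sketch that builds the same remote shell line for an arbitrary unit (hypothetical helper, not minikube's code):

package main

import "fmt"

func updateUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }", path, unit)
}

func main() {
	fmt.Println(updateUnitCmd("docker.service"))
}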
I0229 17:48:20.063996 22364 main.go:141] libmachine: Checking connection to Docker...
I0229 17:48:20.064005 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetURL
I0229 17:48:20.065269 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | Using libvirt version 6000000
I0229 17:48:20.067491 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.067830 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.067872 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.068044 22364 main.go:141] libmachine: Docker is up and running!
I0229 17:48:20.068055 22364 main.go:141] libmachine: Reticulating splines...
I0229 17:48:20.068060 22364 client.go:171] LocalClient.Create took 27.168141318s
I0229 17:48:20.068081 22364 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-924574" took 27.168201447s
I0229 17:48:20.068095 22364 start.go:300] post-start starting for "ingress-addon-legacy-924574" (driver="kvm2")
I0229 17:48:20.068108 22364 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0229 17:48:20.068130 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:20.068376 22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0229 17:48:20.068408 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:20.070461 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.070818 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.070839 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.070973 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:20.071152 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:20.071327 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:20.071468 22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
I0229 17:48:20.150272 22364 ssh_runner.go:195] Run: cat /etc/os-release
I0229 17:48:20.154549 22364 info.go:137] Remote host: Buildroot 2023.02.9
I0229 17:48:20.154574 22364 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/addons for local assets ...
I0229 17:48:20.154635 22364 filesync.go:126] Scanning /home/jenkins/minikube-integration/18259-6402/.minikube/files for local assets ...
I0229 17:48:20.154748 22364 filesync.go:149] local asset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> 136052.pem in /etc/ssl/certs
I0229 17:48:20.154762 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> /etc/ssl/certs/136052.pem
I0229 17:48:20.154841 22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0229 17:48:20.164147 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /etc/ssl/certs/136052.pem (1708 bytes)
I0229 17:48:20.189357 22364 start.go:303] post-start completed in 121.249815ms
I0229 17:48:20.189401 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetConfigRaw
I0229 17:48:20.189950 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
I0229 17:48:20.192285 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.192647 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.192675 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.192905 22364 profile.go:148] Saving config to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/config.json ...
I0229 17:48:20.193078 22364 start.go:128] duration metric: createHost completed in 27.311563843s
I0229 17:48:20.193109 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:20.195031 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.195321 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.195341 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.195456 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:20.195619 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:20.195770 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:20.195928 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:20.196081 22364 main.go:141] libmachine: Using SSH client type: native
I0229 17:48:20.196260 22364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d280] 0x82ffe0 <nil> [] 0s} 192.168.39.8 22 <nil> <nil>}
I0229 17:48:20.196276 22364 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0229 17:48:20.292350 22364 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709228900.269966997
I0229 17:48:20.292382 22364 fix.go:206] guest clock: 1709228900.269966997
I0229 17:48:20.292400 22364 fix.go:219] Guest: 2024-02-29 17:48:20.269966997 +0000 UTC Remote: 2024-02-29 17:48:20.193091996 +0000 UTC m=+31.837318159 (delta=76.875001ms)
I0229 17:48:20.292434 22364 fix.go:190] guest clock delta is within tolerance: 76.875001ms
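The fix.go lines above parse the guest's "date +%s.%N" output and compare it against the host clock; a 76ms delta passes the tolerance check. A sketch of the parse-and-compare step (the one-second tolerance here is an assumption for illustration, not minikube's exact threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func guestDelta(out string) (time.Duration, error) {
	// "1709228900.269966997" -> seconds and nanoseconds.
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	d, err := guestDelta("1709228900.269966997")
	if err != nil {
		fmt.Println(err)
		return
	}
	if math.Abs(d.Seconds()) < 1 {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	} else {
		fmt.Printf("guest clock skewed by %v\n", d)
	}
}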
I0229 17:48:20.292448 22364 start.go:83] releasing machines lock for "ingress-addon-legacy-924574", held for 27.411048515s
I0229 17:48:20.292478 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:20.292738 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
I0229 17:48:20.295253 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.295630 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.295675 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.295853 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:20.296338 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:20.296491 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .DriverName
I0229 17:48:20.296573 22364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0229 17:48:20.296618 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:20.296668 22364 ssh_runner.go:195] Run: cat /version.json
I0229 17:48:20.296691 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHHostname
I0229 17:48:20.299222 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.299474 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.299505 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.299529 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.299652 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:20.299831 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:20.299868 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:20.299891 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:20.300051 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:20.300068 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHPort
I0229 17:48:20.300212 22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
I0229 17:48:20.300267 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHKeyPath
I0229 17:48:20.300415 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetSSHUsername
I0229 17:48:20.300532 22364 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18259-6402/.minikube/machines/ingress-addon-legacy-924574/id_rsa Username:docker}
I0229 17:48:20.372910 22364 ssh_runner.go:195] Run: systemctl --version
I0229 17:48:20.398144 22364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0229 17:48:20.404022 22364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0229 17:48:20.404085 22364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0229 17:48:20.414117 22364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0229 17:48:20.432626 22364 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0229 17:48:20.432658 22364 start.go:475] detecting cgroup driver to use...
I0229 17:48:20.432789 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0229 17:48:20.459183 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0229 17:48:20.471721 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0229 17:48:20.482648 22364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0229 17:48:20.482707 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0229 17:48:20.494638 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 17:48:20.506324 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0229 17:48:20.517854 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0229 17:48:20.529520 22364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0229 17:48:20.541075 22364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0229 17:48:20.552598 22364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0229 17:48:20.563037 22364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0229 17:48:20.573341 22364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 17:48:20.693104 22364 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0229 17:48:20.718809 22364 start.go:475] detecting cgroup driver to use...
I0229 17:48:20.718906 22364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0229 17:48:20.734265 22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0229 17:48:20.749388 22364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0229 17:48:20.769483 22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0229 17:48:20.784497 22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 17:48:20.801154 22364 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0229 17:48:20.831855 22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0229 17:48:20.845834 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0229 17:48:20.865525 22364 ssh_runner.go:195] Run: which cri-dockerd
I0229 17:48:20.869664 22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0229 17:48:20.879404 22364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0229 17:48:20.896615 22364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0229 17:48:21.016895 22364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0229 17:48:21.146479 22364 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0229 17:48:21.146625 22364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0229 17:48:21.164759 22364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 17:48:21.277078 22364 ssh_runner.go:195] Run: sudo systemctl restart docker
I0229 17:48:23.116461 22364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.839342924s)
I0229 17:48:23.116556 22364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0229 17:48:23.142234 22364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0229 17:48:23.168826 22364 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
I0229 17:48:23.168872 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) Calling .GetIP
I0229 17:48:23.171343 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:23.171609 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:77:95", ip: ""} in network mk-ingress-addon-legacy-924574: {Iface:virbr1 ExpiryTime:2024-02-29 18:48:07 +0000 UTC Type:0 Mac:52:54:00:90:77:95 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:ingress-addon-legacy-924574 Clientid:01:52:54:00:90:77:95}
I0229 17:48:23.171657 22364 main.go:141] libmachine: (ingress-addon-legacy-924574) DBG | domain ingress-addon-legacy-924574 has defined IP address 192.168.39.8 and MAC address 52:54:00:90:77:95 in network mk-ingress-addon-legacy-924574
I0229 17:48:23.171847 22364 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0229 17:48:23.176510 22364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
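The command above uses a rewrite-and-swap idiom: filter out any stale host.minikube.internal line with grep -v, append the fresh entry, write to a temp file, then copy it over /etc/hosts so readers never see a half-written file. A hypothetical local variant in Go:

package main

import (
	"os"
	"strings"
)

func setHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line for this hostname, like grep -v.
		if !strings.HasSuffix(line, "\t"+host) && !strings.HasSuffix(line, " "+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = setHostEntry("hosts.test", "192.168.39.1", "host.minikube.internal")
}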
I0229 17:48:23.190409 22364 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
I0229 17:48:23.190494 22364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0229 17:48:23.208199 22364 docker.go:685] Got preloaded images:
I0229 17:48:23.208223 22364 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0229 17:48:23.208281 22364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0229 17:48:23.218614 22364 ssh_runner.go:195] Run: which lz4
I0229 17:48:23.222673 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
I0229 17:48:23.222745 22364 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0229 17:48:23.226775 22364 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0229 17:48:23.226798 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
I0229 17:48:24.765370 22364 docker.go:649] Took 1.542630 seconds to copy over tarball
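The sequence above is check-then-copy: stat the target path first, and only push the 424 MB preload tarball when it is missing (as here, where statx fails) or stale. A local sketch of the same skip logic, using size as the cheap equality check (paths are stand-ins; the real runner stats over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

func copyIfMissing(src, dst string) error {
	si, err := os.Stat(src)
	if err != nil {
		return err
	}
	if di, err := os.Stat(dst); err == nil && di.Size() == si.Size() {
		return nil // already present with the same size; skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}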
I0229 17:48:24.765456 22364 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0229 17:48:27.058565 22364 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.293080317s)
I0229 17:48:27.058590 22364 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0229 17:48:27.098240 22364 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0229 17:48:27.108908 22364 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
I0229 17:48:27.126374 22364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0229 17:48:27.247283 22364 ssh_runner.go:195] Run: sudo systemctl restart docker
I0229 17:48:31.458576 22364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.211256283s)
I0229 17:48:31.458686 22364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0229 17:48:31.480757 22364 docker.go:685] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.18.20
k8s.gcr.io/kube-apiserver:v1.18.20
k8s.gcr.io/kube-scheduler:v1.18.20
k8s.gcr.io/kube-controller-manager:v1.18.20
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.2
k8s.gcr.io/coredns:1.6.7
k8s.gcr.io/etcd:3.4.3-0
-- /stdout --
I0229 17:48:31.480779 22364 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
I0229 17:48:31.480788 22364 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
I0229 17:48:31.482625 22364 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:48:31.482637 22364 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 17:48:31.482625 22364 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:48:31.482679 22364 image.go:134] retrieving image: registry.k8s.io/pause:3.2
I0229 17:48:31.482624 22364 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0229 17:48:31.482633 22364 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
I0229 17:48:31.482633 22364 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
I0229 17:48:31.482634 22364 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:48:31.483250 22364 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0229 17:48:31.483453 22364 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
I0229 17:48:31.483478 22364 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:48:31.483511 22364 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0229 17:48:31.483453 22364 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:48:31.483454 22364 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
I0229 17:48:31.483506 22364 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
I0229 17:48:31.483800 22364 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:48:31.637956 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
I0229 17:48:31.656178 22364 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0229 17:48:31.656219 22364 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
I0229 17:48:31.656255 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
I0229 17:48:31.662439 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
I0229 17:48:31.670481 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
I0229 17:48:31.673916 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:48:31.675358 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:48:31.679378 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0229 17:48:31.688296 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
I0229 17:48:31.692235 22364 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0229 17:48:31.692275 22364 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
I0229 17:48:31.692309 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
I0229 17:48:31.698545 22364 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
I0229 17:48:31.698582 22364 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
I0229 17:48:31.698615 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
I0229 17:48:31.727829 22364 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
I0229 17:48:31.727886 22364 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:48:31.727925 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
I0229 17:48:31.729724 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:48:31.735523 22364 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
I0229 17:48:31.735558 22364 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:48:31.735595 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
I0229 17:48:31.740748 22364 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0229 17:48:31.740787 22364 docker.go:337] Removing image: registry.k8s.io/pause:3.2
I0229 17:48:31.740828 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
I0229 17:48:31.757096 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
I0229 17:48:31.759089 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
I0229 17:48:31.794650 22364 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
I0229 17:48:31.794696 22364 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:48:31.794734 22364 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
I0229 17:48:31.794755 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
I0229 17:48:31.798077 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
I0229 17:48:31.802572 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
I0229 17:48:31.816369 22364 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
I0229 17:48:32.049650 22364 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0229 17:48:32.070695 22364 cache_images.go:92] LoadImages completed in 589.892517ms
W0229 17:48:32.070762 22364 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18259-6402/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
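The cache_images.go flow above: the preload ships k8s.gcr.io-tagged images, so each registry.k8s.io image is inspected, flagged as "needs transfer" when the runtime's image ID does not match the expected hash, removed, and then reloaded from the local cache; here the cache files were never downloaded, so the load is skipped with the warning (the run proceeds anyway and pulls what it needs). A sketch of the needs-transfer decision (requires a docker CLI; the hash is the etcd ID from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all -> must be transferred
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	n := needsTransfer("registry.k8s.io/etcd:3.4.3-0",
		"303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f")
	fmt.Println("needs transfer:", n)
}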
I0229 17:48:32.070826 22364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0229 17:48:32.098167 22364 cni.go:84] Creating CNI manager for ""
I0229 17:48:32.098188 22364 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0229 17:48:32.098207 22364 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0229 17:48:32.098223 22364 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-924574 NodeName:ingress-addon-legacy-924574 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0229 17:48:32.098348 22364 kubeadm.go:181] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.8
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "ingress-addon-legacy-924574"
  kubeletExtraArgs:
    node-ip: 192.168.39.8
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.18.20
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
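minikube renders this kubeadm config from Go templates, filling in the node IP, name, and CRI socket from the options struct logged above. A toy text/template rendering of just the InitConfiguration block (field names here are illustrative, not minikube's actual template data):

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"AdvertiseAddress": "192.168.39.8",
		"APIServerPort":    "8443",
		"CRISocket":        "/var/run/dockershim.sock",
		"NodeName":         "ingress-addon-legacy-924574",
		"NodeIP":           "192.168.39.8",
	})
}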
I0229 17:48:32.098416 22364 kubeadm.go:976] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-924574 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
[Install]
config:
{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0229 17:48:32.098471 22364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
I0229 17:48:32.108993 22364 binaries.go:44] Found k8s binaries, skipping transfer
I0229 17:48:32.109066 22364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0229 17:48:32.119451 22364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
I0229 17:48:32.136865 22364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
I0229 17:48:32.154638 22364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
I0229 17:48:32.172052 22364 ssh_runner.go:195] Run: grep 192.168.39.8 control-plane.minikube.internal$ /etc/hosts
I0229 17:48:32.176244 22364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0229 17:48:32.189028 22364 certs.go:56] Setting up /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574 for IP: 192.168.39.8
I0229 17:48:32.189061 22364 certs.go:190] acquiring lock for shared ca certs: {Name:mk2b1e0afe2a06bed6008eeccac41dd786e239ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:48:32.189235 22364 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key
I0229 17:48:32.189311 22364 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key
I0229 17:48:32.189360 22364 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.key
I0229 17:48:32.189379 22364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.crt with IP's: []
I0229 17:48:32.377815 22364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.crt ...
I0229 17:48:32.377844 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.crt: {Name:mk320b3274c2bb1527f295851eb825e478f7263b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:48:32.378022 22364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.key ...
I0229 17:48:32.378041 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/client.key: {Name:mk9789f5ea3b75ed9f1801ad0fb12835210feb10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:48:32.378148 22364 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5
I0229 17:48:32.378172 22364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5 with IP's: [192.168.39.8 10.96.0.1 127.0.0.1 10.0.0.1]
I0229 17:48:32.587312 22364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5 ...
I0229 17:48:32.587344 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5: {Name:mk68443d67bd671a96b725e56ab8e6b1af8d018e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:48:32.587515 22364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5 ...
I0229 17:48:32.587532 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5: {Name:mk956713b0c102c8329150e00fc994d8a0d1aff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:48:32.587661 22364 certs.go:337] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt.8e2e64d5 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt
I0229 17:48:32.587773 22364 certs.go:341] copying /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key.8e2e64d5 -> /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key
I0229 17:48:32.587851 22364 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key
I0229 17:48:32.587872 22364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt with IP's: []
I0229 17:48:32.825207 22364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt ...
I0229 17:48:32.825239 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt: {Name:mk2f4c19c78da2dc09d24a86847ccea004b24dd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0229 17:48:32.825411 22364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key ...
I0229 17:48:32.825427 22364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key: {Name:mk92ee93b316f69f56781dc42d0a9e7568b1ed33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
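The crypto.go lines above generate a keypair and a client certificate signed by the shared minikubeCA (the apiserver cert additionally gets the IP SANs listed earlier). A sketch of that signing step with crypto/x509; the subject names and output files are illustrative, and a throwaway self-signed CA stands in for minikube's ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func signedClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA so the sketch runs without minikube's real ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	certPEM, keyPEM, err := signedClientCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("client.crt", certPEM, 0644)
	os.WriteFile("client.key", keyPEM, 0600)
}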
I0229 17:48:32.825527 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0229 17:48:32.825548 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0229 17:48:32.825579 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0229 17:48:32.825599 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0229 17:48:32.825614 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0229 17:48:32.825627 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0229 17:48:32.825636 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0229 17:48:32.825645 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0229 17:48:32.825721 22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem (1338 bytes)
W0229 17:48:32.825755 22364 certs.go:433] ignoring /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605_empty.pem, impossibly tiny 0 bytes
I0229 17:48:32.825765 22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca-key.pem (1675 bytes)
I0229 17:48:32.825800 22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/ca.pem (1078 bytes)
I0229 17:48:32.825831 22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/cert.pem (1123 bytes)
I0229 17:48:32.825865 22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/home/jenkins/minikube-integration/18259-6402/.minikube/certs/key.pem (1675 bytes)
I0229 17:48:32.825922 22364 certs.go:437] found cert: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem (1708 bytes)
I0229 17:48:32.825963 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem -> /usr/share/ca-certificates/13605.pem
I0229 17:48:32.825982 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem -> /usr/share/ca-certificates/136052.pem
I0229 17:48:32.826000 22364 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0229 17:48:32.826623 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0229 17:48:32.852579 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0229 17:48:32.876773 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0229 17:48:32.901419 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/profiles/ingress-addon-legacy-924574/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0229 17:48:32.926327 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0229 17:48:32.950585 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0229 17:48:32.975576 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0229 17:48:32.999863 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0229 17:48:33.023940 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/certs/13605.pem --> /usr/share/ca-certificates/13605.pem (1338 bytes)
I0229 17:48:33.048437 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/files/etc/ssl/certs/136052.pem --> /usr/share/ca-certificates/136052.pem (1708 bytes)
I0229 17:48:33.072590 22364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18259-6402/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0229 17:48:33.096707 22364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0229 17:48:33.113652 22364 ssh_runner.go:195] Run: openssl version
I0229 17:48:33.119598 22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0229 17:48:33.130830 22364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0229 17:48:33.135603 22364 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:38 /usr/share/ca-certificates/minikubeCA.pem
I0229 17:48:33.135683 22364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0229 17:48:33.141413 22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0229 17:48:33.153340 22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13605.pem && ln -fs /usr/share/ca-certificates/13605.pem /etc/ssl/certs/13605.pem"
I0229 17:48:33.164947 22364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13605.pem
I0229 17:48:33.169594 22364 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:42 /usr/share/ca-certificates/13605.pem
I0229 17:48:33.169645 22364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13605.pem
I0229 17:48:33.175332 22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13605.pem /etc/ssl/certs/51391683.0"
I0229 17:48:33.186715 22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136052.pem && ln -fs /usr/share/ca-certificates/136052.pem /etc/ssl/certs/136052.pem"
I0229 17:48:33.198283 22364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136052.pem
I0229 17:48:33.202935 22364 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:42 /usr/share/ca-certificates/136052.pem
I0229 17:48:33.202987 22364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136052.pem
I0229 17:48:33.208855 22364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136052.pem /etc/ssl/certs/3ec20f2e.0"
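The three test/ln/openssl sequences above are how minikube publishes each CA into the guest's trust store: OpenSSL resolves a CA by its subject hash, so each certificate gets a name symlink plus a <hash>.0 symlink in /etc/ssl/certs (b5213941.0 above is that hash for minikubeCA.pem). A minimal manual equivalent for the minikubeCA entry, sketched with paths from this run:

    # compute the subject hash OpenSSL will look up
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # link the cert under its name and under its hash, as the commands above do
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"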
I0229 17:48:33.220230 22364 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
I0229 17:48:33.224579 22364 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
I0229 17:48:33.224633 22364 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-924574 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-924574 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0229 17:48:33.224748 22364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0229 17:48:33.241898 22364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0229 17:48:33.253109 22364 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0229 17:48:33.263577 22364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 17:48:33.273615 22364 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0229 17:48:33.273663 22364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 17:48:33.330498 22364 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 17:48:33.330584 22364 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 17:48:33.530811 22364 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 17:48:33.530939 22364 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 17:48:33.531044 22364 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 17:48:33.691581 22364 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 17:48:33.692582 22364 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 17:48:33.692652 22364 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 17:48:33.831035 22364 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 17:48:33.833210 22364 out.go:204] - Generating certificates and keys ...
I0229 17:48:33.836016 22364 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 17:48:33.836135 22364 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 17:48:33.970431 22364 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0229 17:48:34.126323 22364 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0229 17:48:34.248746 22364 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0229 17:48:34.392101 22364 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0229 17:48:34.621483 22364 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0229 17:48:34.621794 22364 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
I0229 17:48:34.915940 22364 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0229 17:48:34.916152 22364 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
I0229 17:48:35.179651 22364 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0229 17:48:35.356070 22364 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0229 17:48:35.483977 22364 kubeadm.go:322] [certs] Generating "sa" key and public key
I0229 17:48:35.484165 22364 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 17:48:35.595362 22364 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 17:48:35.826575 22364 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 17:48:36.056336 22364 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 17:48:36.508366 22364 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 17:48:36.509059 22364 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 17:48:36.511102 22364 out.go:204] - Booting up control plane ...
I0229 17:48:36.511208 22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 17:48:36.528846 22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 17:48:36.528978 22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 17:48:36.531649 22364 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 17:48:36.532382 22364 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 17:49:16.529546 22364 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 17:52:36.530209 22364 kubeadm.go:322]
I0229 17:52:36.530290 22364 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 17:52:36.530330 22364 kubeadm.go:322] timed out waiting for the condition
I0229 17:52:36.530336 22364 kubeadm.go:322]
I0229 17:52:36.530381 22364 kubeadm.go:322] This error is likely caused by:
I0229 17:52:36.530415 22364 kubeadm.go:322] - The kubelet is not running
I0229 17:52:36.530599 22364 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 17:52:36.530637 22364 kubeadm.go:322]
I0229 17:52:36.530761 22364 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 17:52:36.530815 22364 kubeadm.go:322] - 'systemctl status kubelet'
I0229 17:52:36.530857 22364 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 17:52:36.530865 22364 kubeadm.go:322]
I0229 17:52:36.530989 22364 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 17:52:36.531092 22364 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 17:52:36.531103 22364 kubeadm.go:322]
I0229 17:52:36.531203 22364 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0229 17:52:36.531273 22364 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0229 17:52:36.531360 22364 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 17:52:36.531412 22364 kubeadm.go:322] - 'docker logs CONTAINERID'
I0229 17:52:36.531422 22364 kubeadm.go:322]
I0229 17:52:36.532053 22364 kubeadm.go:322] W0229 17:48:33.309569 1367 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 17:52:36.532326 22364 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0229 17:52:36.532517 22364 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0229 17:52:36.532620 22364 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 17:52:36.532740 22364 kubeadm.go:322] W0229 17:48:36.501119 1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:52:36.532846 22364 kubeadm.go:322] W0229 17:48:36.508719 1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:52:36.532917 22364 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 17:52:36.532975 22364 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
W0229 17:52:36.533150 22364 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-924574 localhost] and IPs [192.168.39.8 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 17:48:33.309569 1367 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:48:36.501119 1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:48:36.508719 1367 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
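When the first kubeadm init attempt times out like this, the checks kubeadm suggests can be run from the host over minikube ssh while the automatic retry below is still in flight; a sketch using this run's profile name:

    minikube ssh -p ingress-addon-legacy-924574 -- sudo systemctl status kubelet
    minikube ssh -p ingress-addon-legacy-924574 -- sudo journalctl -xeu kubelet --no-pager
    minikube ssh -p ingress-addon-legacy-924574 -- "docker ps -a | grep kube | grep -v pause"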
I0229 17:52:36.533211 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0229 17:52:36.965137 22364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0229 17:52:36.980167 22364 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0229 17:52:36.990551 22364 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0229 17:52:36.990595 22364 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
I0229 17:52:37.048475 22364 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
I0229 17:52:37.048535 22364 kubeadm.go:322] [preflight] Running pre-flight checks
I0229 17:52:37.249481 22364 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0229 17:52:37.249596 22364 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0229 17:52:37.249742 22364 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0229 17:52:37.413294 22364 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0229 17:52:37.414225 22364 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0229 17:52:37.414294 22364 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0229 17:52:37.553117 22364 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0229 17:52:37.555853 22364 out.go:204] - Generating certificates and keys ...
I0229 17:52:37.555963 22364 kubeadm.go:322] [certs] Using existing ca certificate authority
I0229 17:52:37.556045 22364 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0229 17:52:37.556263 22364 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0229 17:52:37.556849 22364 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
I0229 17:52:37.557793 22364 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
I0229 17:52:37.558369 22364 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
I0229 17:52:37.559159 22364 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
I0229 17:52:37.559606 22364 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
I0229 17:52:37.560121 22364 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0229 17:52:37.560477 22364 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0229 17:52:37.560633 22364 kubeadm.go:322] [certs] Using the existing "sa" key
I0229 17:52:37.560688 22364 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0229 17:52:37.707855 22364 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0229 17:52:37.845135 22364 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0229 17:52:37.936691 22364 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0229 17:52:38.087992 22364 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0229 17:52:38.088782 22364 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0229 17:52:38.090680 22364 out.go:204] - Booting up control plane ...
I0229 17:52:38.090788 22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0229 17:52:38.095162 22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0229 17:52:38.096224 22364 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0229 17:52:38.096961 22364 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0229 17:52:38.100186 22364 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0229 17:53:18.102413 22364 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
I0229 17:53:18.103172 22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:53:18.103339 22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:53:23.103926 22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:53:23.104117 22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:53:33.104802 22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:53:33.104995 22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:53:53.106246 22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:53:53.106736 22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:54:33.108552 22364 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
I0229 17:54:33.108793 22364 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0229 17:54:33.108804 22364 kubeadm.go:322]
I0229 17:54:33.108851 22364 kubeadm.go:322] Unfortunately, an error has occurred:
I0229 17:54:33.108922 22364 kubeadm.go:322] timed out waiting for the condition
I0229 17:54:33.108932 22364 kubeadm.go:322]
I0229 17:54:33.108977 22364 kubeadm.go:322] This error is likely caused by:
I0229 17:54:33.109050 22364 kubeadm.go:322] - The kubelet is not running
I0229 17:54:33.109194 22364 kubeadm.go:322] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I0229 17:54:33.109207 22364 kubeadm.go:322]
I0229 17:54:33.109326 22364 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I0229 17:54:33.109374 22364 kubeadm.go:322] - 'systemctl status kubelet'
I0229 17:54:33.109434 22364 kubeadm.go:322] - 'journalctl -xeu kubelet'
I0229 17:54:33.109444 22364 kubeadm.go:322]
I0229 17:54:33.109559 22364 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
I0229 17:54:33.109675 22364 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI.
I0229 17:54:33.109690 22364 kubeadm.go:322]
I0229 17:54:33.109768 22364 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
I0229 17:54:33.109820 22364 kubeadm.go:322] - 'docker ps -a | grep kube | grep -v pause'
I0229 17:54:33.109936 22364 kubeadm.go:322] Once you have found the failing container, you can inspect its logs with:
I0229 17:54:33.109978 22364 kubeadm.go:322] - 'docker logs CONTAINERID'
I0229 17:54:33.109989 22364 kubeadm.go:322]
I0229 17:54:33.110803 22364 kubeadm.go:322] W0229 17:52:37.032934 36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I0229 17:54:33.110993 22364 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0229 17:54:33.111182 22364 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
I0229 17:54:33.111338 22364 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0229 17:54:33.111468 22364 kubeadm.go:322] W0229 17:52:38.080047 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:54:33.111621 22364 kubeadm.go:322] W0229 17:52:38.081127 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0229 17:54:33.111738 22364 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
I0229 17:54:33.111832 22364 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
I0229 17:54:33.111921 22364 kubeadm.go:406] StartCluster complete in 5m59.88729152s
I0229 17:54:33.112013 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0229 17:54:33.135201 22364 logs.go:276] 0 containers: []
W0229 17:54:33.135249 22364 logs.go:278] No container was found matching "kube-apiserver"
I0229 17:54:33.135300 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0229 17:54:33.152819 22364 logs.go:276] 0 containers: []
W0229 17:54:33.152850 22364 logs.go:278] No container was found matching "etcd"
I0229 17:54:33.152909 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0229 17:54:33.169889 22364 logs.go:276] 0 containers: []
W0229 17:54:33.169916 22364 logs.go:278] No container was found matching "coredns"
I0229 17:54:33.169968 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0229 17:54:33.188070 22364 logs.go:276] 0 containers: []
W0229 17:54:33.188098 22364 logs.go:278] No container was found matching "kube-scheduler"
I0229 17:54:33.188157 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0229 17:54:33.205788 22364 logs.go:276] 0 containers: []
W0229 17:54:33.205815 22364 logs.go:278] No container was found matching "kube-proxy"
I0229 17:54:33.205873 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0229 17:54:33.232848 22364 logs.go:276] 0 containers: []
W0229 17:54:33.232884 22364 logs.go:278] No container was found matching "kube-controller-manager"
I0229 17:54:33.232945 22364 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0229 17:54:33.254887 22364 logs.go:276] 0 containers: []
W0229 17:54:33.254914 22364 logs.go:278] No container was found matching "kindnet"
I0229 17:54:33.254927 22364 logs.go:123] Gathering logs for Docker ...
I0229 17:54:33.254941 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0229 17:54:33.304386 22364 logs.go:123] Gathering logs for container status ...
I0229 17:54:33.304421 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0229 17:54:33.400088 22364 logs.go:123] Gathering logs for kubelet ...
I0229 17:54:33.400119 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0229 17:54:33.430387 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:24 ingress-addon-legacy-924574 kubelet[51379]: F0229 17:54:24.785892 51379 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:54:33.436594 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:26 ingress-addon-legacy-924574 kubelet[51554]: F0229 17:54:26.028541 51554 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:54:33.442827 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:27 ingress-addon-legacy-924574 kubelet[51732]: F0229 17:54:27.250912 51732 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:54:33.449017 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:28 ingress-addon-legacy-924574 kubelet[51910]: F0229 17:54:28.591907 51910 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:54:33.455198 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:30 ingress-addon-legacy-924574 kubelet[52092]: F0229 17:54:30.025996 52092 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:54:33.461424 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:31 ingress-addon-legacy-924574 kubelet[52276]: F0229 17:54:31.268886 52276 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
W0229 17:54:33.467607 22364 logs.go:138] Found kubelet problem: Feb 29 17:54:32 ingress-addon-legacy-924574 kubelet[52461]: F0229 17:54:32.515121 52461 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
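These crash loops are the first concrete failure signature in the log: every kubelet restart dies in ContainerManager with "failed to get rootfs info: unable to find data in memory cache", which typically points to the v1.18 kubelet's embedded cAdvisor being unable to gather filesystem stats on a newer host stack, and is consistent with the preflight warning above about Docker 24.0.7 against a kubelet validated only up to 19.03. A quick way to confirm the signature from the host (a sketch; profile name from this run):

    minikube ssh -p ingress-addon-legacy-924574 -- \
      "sudo journalctl -u kubelet -n 400 | grep -F 'Failed to start ContainerManager'"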
I0229 17:54:33.468453 22364 logs.go:123] Gathering logs for dmesg ...
I0229 17:54:33.468471 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0229 17:54:33.485109 22364 logs.go:123] Gathering logs for describe nodes ...
I0229 17:54:33.485133 22364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0229 17:54:33.550239 22364 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
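The connection refused here is expected rather than a separate fault: the empty container listings above show that no kube-apiserver container ever started, so nothing is serving localhost:8443 inside the guest. Checking the listener directly makes that explicit (a sketch, assuming ss is available in the guest image):

    minikube ssh -p ingress-addon-legacy-924574 -- \
      "sudo ss -tlnp | grep 8443 || echo 'apiserver not listening'"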
W0229 17:54:33.550270 22364 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 17:52:37.032934 36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:52:38.080047 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:52:38.081127 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
W0229 17:54:33.550314 22364 out.go:239] *
*
W0229 17:54:33.550368 22364 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 17:52:37.032934 36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:52:38.080047 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:52:38.081127 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.18.20
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtime's CLI.
Here is one example of how you may list all Kubernetes containers running in Docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
stderr:
W0229 17:52:37.032934 36106 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0229 17:52:38.080047 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0229 17:52:38.081127 36106 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
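The IsDockerSystemdCheck and Service-Kubelet warnings in the stderr above each have a standard remedy; a hedged sketch, assuming the default /etc/docker/daemon.json path (note this overwrites any existing daemon.json):

    # Switch Docker from cgroupfs to the systemd cgroup driver kubeadm recommends
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker
    # Enable the kubelet unit, as the Service-Kubelet warning asks
    sudo systemctl enable kubelet.service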
W0229 17:54:33.550396 22364 out.go:239] *
W0229 17:54:33.551451 22364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
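One way to produce the logs.txt the box above asks for, using the binary and profile name from this run:

    # Collect the full minikube log bundle for this profile and attach it to the issue
    out/minikube-linux-amd64 logs -p ingress-addon-legacy-924574 --file=logs.txt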
I0229 17:54:33.553636 22364 out.go:177] X Problems detected in kubelet:
I0229 17:54:33.554849 22364 out.go:177] Feb 29 17:54:24 ingress-addon-legacy-924574 kubelet[51379]: F0229 17:54:24.785892 51379 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 17:54:33.556254 22364 out.go:177] Feb 29 17:54:26 ingress-addon-legacy-924574 kubelet[51554]: F0229 17:54:26.028541 51554 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 17:54:33.557835 22364 out.go:177] Feb 29 17:54:27 ingress-addon-legacy-924574 kubelet[51732]: F0229 17:54:27.250912 51732 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
I0229 17:54:33.560652 22364 out.go:177]
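The ContainerManager/rootfs crash loop above, together with the cgroup warnings earlier, makes the node's cgroup configuration worth checking; a speculative diagnostic sketch:

    # Report Docker's cgroup driver and cgroup version as Docker sees them
    docker info --format '{{.CgroupDriver}} cgroup v{{.CgroupVersion}}'
    # cgroup2fs here indicates a cgroup v2 host, which a v1.18-era kubelet may not fully support
    stat -fc %T /sys/fs/cgroup/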
W0229 17:54:33.562037 22364 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
W0229 17:54:33.562091 22364 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W0229 17:54:33.562118 22364 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I0229 17:54:33.563828 22364 out.go:177]
** /stderr **
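The Suggestion above amounts to rerunning the same start with the kubelet's cgroup driver pinned to systemd; a sketch reusing the flags from the failing invocation:

    # Retry with --extra-config=kubelet.cgroup-driver=systemd, all other flags unchanged
    out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
      --alsologtostderr -v=5 --driver=kvm2 \
      --extra-config=kubelet.cgroup-driver=systemd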
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-linux-amd64 start -p ingress-addon-legacy-924574 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (405.28s)