=== RUN TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run: out/minikube-linux-amd64 node list -p ha-735960 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run: out/minikube-linux-amd64 stop -p ha-735960 -v=7 --alsologtostderr
E0701 12:20:33.803031 637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-735960 -v=7 --alsologtostderr: (40.842561302s)
ha_test.go:467: (dbg) Run: out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr
E0701 12:21:55.724202 637854 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/functional-377045/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr: exit status 90 (1m44.497689423s)
-- stdout --
* [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=19166
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "ha-735960" primary control-plane node in "ha-735960" cluster
* Restarting existing kvm2 VM for "ha-735960" ...
* Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
* Enabled addons:
* Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
* Restarting existing kvm2 VM for "ha-735960-m02" ...
* Found network options:
- NO_PROXY=192.168.39.16
-- /stdout --
** stderr **
I0701 12:21:13.996326 652196 out.go:291] Setting OutFile to fd 1 ...
I0701 12:21:13.996600 652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:21:13.996610 652196 out.go:304] Setting ErrFile to fd 2...
I0701 12:21:13.996615 652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:21:13.996825 652196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:21:13.997417 652196 out.go:298] Setting JSON to false
I0701 12:21:13.998463 652196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7412,"bootTime":1719829062,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 12:21:13.998525 652196 start.go:139] virtualization: kvm guest
I0701 12:21:14.000967 652196 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0701 12:21:14.002666 652196 out.go:177] - MINIKUBE_LOCATION=19166
I0701 12:21:14.002690 652196 notify.go:220] Checking for updates...
I0701 12:21:14.005489 652196 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 12:21:14.006983 652196 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:14.008350 652196 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
I0701 12:21:14.009593 652196 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 12:21:14.011091 652196 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0701 12:21:14.012857 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:14.012999 652196 driver.go:392] Setting default libvirt URI to qemu:///system
I0701 12:21:14.013468 652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:21:14.013542 652196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:21:14.028581 652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
I0701 12:21:14.028967 652196 main.go:141] libmachine: () Calling .GetVersion
I0701 12:21:14.029528 652196 main.go:141] libmachine: Using API Version 1
I0701 12:21:14.029551 652196 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:21:14.029916 652196 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:21:14.030116 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:14.065038 652196 out.go:177] * Using the kvm2 driver based on existing profile
I0701 12:21:14.066535 652196 start.go:297] selected driver: kvm2
I0701 12:21:14.066551 652196 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 12:21:14.066723 652196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 12:21:14.067041 652196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 12:21:14.067114 652196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0701 12:21:14.082191 652196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0701 12:21:14.082920 652196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0701 12:21:14.082959 652196 cni.go:84] Creating CNI manager for ""
I0701 12:21:14.082966 652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0701 12:21:14.083026 652196 start.go:340] cluster config:
{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
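
For reference, the cni.go:136 decision above keys off the node count in the loaded profile. A minimal Go sketch of that kind of selection; the node type, recommendCNI helper, and the explicit-flag short-circuit are illustrative assumptions, not minikube's actual code:

package main

import "fmt"

// node is a hypothetical stand-in for the per-node entries in the config above.
type node struct {
    Name         string
    ControlPlane bool
}

// recommendCNI mirrors the pattern logged above: a multinode cluster gets
// kindnet recommended, a single node leaves CNI selection to the defaults.
func recommendCNI(nodes []node, requested string) string {
    if requested != "" {
        return requested // an explicit --cni flag would win
    }
    if len(nodes) > 1 {
        return "kindnet" // "multinode detected (N nodes found), recommending kindnet"
    }
    return ""
}

func main() {
    nodes := []node{{"", true}, {"m02", true}, {"m03", true}, {"m04", false}}
    fmt.Printf("multinode detected (%d nodes found), recommending %s\n",
        len(nodes), recommendCNI(nodes, ""))
}
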
I0701 12:21:14.083142 652196 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 12:21:14.086358 652196 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
I0701 12:21:14.087757 652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 12:21:14.087794 652196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0701 12:21:14.087805 652196 cache.go:56] Caching tarball of preloaded images
I0701 12:21:14.087882 652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 12:21:14.087892 652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
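
The preload lines above stat a cached tarball before falling back to a download. A small self-contained sketch of the same check, assuming only the cache layout visible in the log:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// preloadPath builds the cache location seen in the log; the layout is taken
// from the path printed above, not from minikube's preload.go.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
    name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
    return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
    p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.30.2", "docker")
    if _, err := os.Stat(p); err == nil {
        fmt.Println("Found local preload:", p, "- skipping download")
    } else {
        fmt.Println("No local preload, would download:", p)
    }
}
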
I0701 12:21:14.088044 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:14.088232 652196 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 12:21:14.088271 652196 start.go:364] duration metric: took 21.615µs to acquireMachinesLock for "ha-735960"
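
acquireMachinesLock above reports Delay:500ms and Timeout:13m0s semantics. A lockfile-based sketch of that acquire-with-retry-until-deadline pattern (the lockfile mechanism is an assumption; minikube's actual lock implementation is not shown in this log):

package main

import (
    "fmt"
    "os"
    "time"
)

// acquireLock retries every delay until the timeout expires, mirroring the
// Delay/Timeout fields printed above.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    deadline := time.Now().Add(timeout)
    for {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if err == nil {
            f.Close()
            return func() { os.Remove(path) }, nil
        }
        if time.Now().After(deadline) {
            return nil, fmt.Errorf("timed out acquiring %s", path)
        }
        time.Sleep(delay)
    }
}

func main() {
    start := time.Now()
    release, err := acquireLock("/tmp/ha-735960.lock", 500*time.Millisecond, 13*time.Minute)
    if err != nil {
        panic(err)
    }
    defer release()
    fmt.Printf("duration metric: took %v to acquireMachinesLock\n", time.Since(start))
}
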
I0701 12:21:14.088285 652196 start.go:96] Skipping create...Using existing machine configuration
I0701 12:21:14.088293 652196 fix.go:54] fixHost starting:
I0701 12:21:14.088547 652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:21:14.088578 652196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:21:14.103070 652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
I0701 12:21:14.103508 652196 main.go:141] libmachine: () Calling .GetVersion
I0701 12:21:14.104025 652196 main.go:141] libmachine: Using API Version 1
I0701 12:21:14.104050 652196 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:21:14.104424 652196 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:21:14.104649 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:14.104829 652196 main.go:141] libmachine: (ha-735960) Calling .GetState
I0701 12:21:14.106608 652196 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
I0701 12:21:14.106630 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
W0701 12:21:14.106790 652196 fix.go:138] unexpected machine state, will restart: <nil>
I0701 12:21:14.108833 652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
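
fix.go above sees state=Stopped and chooses to restart the existing VM rather than recreate it. A hypothetical sketch of that branch; the state type and fixAction helper are illustrative only, not minikube's fix.go API:

package main

import "fmt"

type state string

const (
    Running state = "Running"
    Stopped state = "Stopped"
    None    state = "None"
)

// fixAction mirrors the decision logged above: a stopped machine is
// restarted, a missing one would be recreated, a running one is reused.
func fixAction(s state) string {
    switch s {
    case Running:
        return "reuse running machine"
    case Stopped:
        return "unexpected machine state, will restart"
    default:
        return "recreate machine"
    }
}

func main() {
    fmt.Println(fixAction(Stopped))
}
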
I0701 12:21:14.110060 652196 main.go:141] libmachine: (ha-735960) Calling .Start
I0701 12:21:14.110234 652196 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
I0701 12:21:14.110976 652196 main.go:141] libmachine: (ha-735960) Ensuring network default is active
I0701 12:21:14.111299 652196 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
I0701 12:21:14.111665 652196 main.go:141] libmachine: (ha-735960) Getting domain xml...
I0701 12:21:14.112420 652196 main.go:141] libmachine: (ha-735960) Creating domain...
I0701 12:21:15.307133 652196 main.go:141] libmachine: (ha-735960) Waiting to get IP...
I0701 12:21:15.308062 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:15.308526 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:15.308647 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.308493 652224 retry.go:31] will retry after 239.111405ms: waiting for machine to come up
I0701 12:21:15.549211 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:15.549648 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:15.549679 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.549597 652224 retry.go:31] will retry after 248.256131ms: waiting for machine to come up
I0701 12:21:15.799054 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:15.799481 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:15.799534 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.799422 652224 retry.go:31] will retry after 380.468685ms: waiting for machine to come up
I0701 12:21:16.181969 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:16.182432 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:16.182634 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.182540 652224 retry.go:31] will retry after 592.847587ms: waiting for machine to come up
I0701 12:21:16.777393 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:16.777837 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:16.777867 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.777790 652224 retry.go:31] will retry after 639.749416ms: waiting for machine to come up
I0701 12:21:17.419540 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:17.419941 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:17.419965 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:17.419916 652224 retry.go:31] will retry after 891.768613ms: waiting for machine to come up
I0701 12:21:18.312967 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:18.313455 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:18.313484 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:18.313399 652224 retry.go:31] will retry after 1.112048412s: waiting for machine to come up
I0701 12:21:19.427190 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:19.427624 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:19.427655 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:19.427568 652224 retry.go:31] will retry after 1.150138437s: waiting for machine to come up
I0701 12:21:20.579868 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:20.580291 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:20.580325 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:20.580216 652224 retry.go:31] will retry after 1.129763596s: waiting for machine to come up
I0701 12:21:21.711416 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:21.711892 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:21.711924 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:21.711831 652224 retry.go:31] will retry after 2.143074349s: waiting for machine to come up
I0701 12:21:23.858081 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:23.858617 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:23.858643 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:23.858578 652224 retry.go:31] will retry after 2.436757856s: waiting for machine to come up
I0701 12:21:26.297727 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:26.298302 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:26.298352 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:26.298269 652224 retry.go:31] will retry after 2.609229165s: waiting for machine to come up
I0701 12:21:28.911224 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.911698 652196 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
I0701 12:21:28.911722 652196 main.go:141] libmachine: (ha-735960) Reserving static IP address...
I0701 12:21:28.911731 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.912401 652196 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
I0701 12:21:28.912425 652196 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
I0701 12:21:28.912468 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:28.912492 652196 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
I0701 12:21:28.912507 652196 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
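
The retry.go lines above poll for the VM's IP with growing, jittered delays (239ms, 248ms, 380ms, ... 2.6s). A minimal sketch of that backoff pattern, making no assumption about minikube's actual retry package:

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retryWithBackoff polls fn until it succeeds or attempts run out, roughly
// doubling the delay and adding jitter, like the "will retry after ..." lines.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
    delay := initial
    for i := 0; i < attempts; i++ {
        if err := fn(); err == nil {
            return nil
        }
        wait := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
        fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
        time.Sleep(wait)
        delay *= 2
    }
    return errors.New("machine did not come up")
}

func main() {
    tries := 0
    _ = retryWithBackoff(8, 200*time.Millisecond, func() error {
        tries++
        if tries < 4 {
            return errors.New("unable to find current IP address")
        }
        return nil
    })
    fmt.Println("Found IP for machine after", tries, "attempts")
}
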
I0701 12:21:28.914934 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.915448 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:28.915478 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.915627 652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
I0701 12:21:28.915655 652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
I0701 12:21:28.915680 652196 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
I0701 12:21:28.915698 652196 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
I0701 12:21:28.915730 652196 main.go:141] libmachine: (ha-735960) DBG | exit 0
I0701 12:21:29.042314 652196 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>:
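
The WaitForSSH block above shells out to /usr/bin/ssh with a fixed argument vector and runs `exit 0` to confirm the daemon answers. A sketch driving the external client the same way; the argument values are copied from the log, while the waitForSSH helper itself is illustrative:

package main

import (
    "fmt"
    "os/exec"
)

// waitForSSH invokes the system ssh binary with the non-interactive options
// visible in the log and runs `exit 0` to probe the guest.
func waitForSSH(user, host, keyPath string) error {
    args := []string{
        "-F", "/dev/null",
        "-o", "ConnectionAttempts=3",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        "-p", "22",
        fmt.Sprintf("%s@%s", user, host),
        "exit 0",
    }
    return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
    err := waitForSSH("docker", "192.168.39.16",
        "/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa")
    fmt.Println("SSH cmd err:", err)
}
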
I0701 12:21:29.042747 652196 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
I0701 12:21:29.043414 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:29.046291 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.046689 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.046714 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.046967 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:29.047187 652196 machine.go:94] provisionDockerMachine start ...
I0701 12:21:29.047211 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:29.047467 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.049524 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.049899 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.049924 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.050040 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.050240 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.050477 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.050669 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.050868 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.051073 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.051086 652196 main.go:141] libmachine: About to run SSH command:
hostname
I0701 12:21:29.166645 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0701 12:21:29.166687 652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
I0701 12:21:29.166983 652196 buildroot.go:166] provisioning hostname "ha-735960"
I0701 12:21:29.167013 652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
I0701 12:21:29.167232 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.169829 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.170228 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.170260 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.170403 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.170603 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.170773 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.170913 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.171082 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.171259 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.171270 652196 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
I0701 12:21:29.295697 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
I0701 12:21:29.295728 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.298625 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.299014 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.299041 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.299233 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.299434 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.299641 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.299795 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.299954 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.300149 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.300171 652196 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-735960' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
else
echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts;
fi
fi
I0701 12:21:29.418489 652196 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0701 12:21:29.418522 652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
I0701 12:21:29.418577 652196 buildroot.go:174] setting up certificates
I0701 12:21:29.418593 652196 provision.go:84] configureAuth start
I0701 12:21:29.418612 652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
I0701 12:21:29.418889 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:29.421815 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.422238 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.422275 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.422477 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.424787 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.425187 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.425216 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.425427 652196 provision.go:143] copyHostCerts
I0701 12:21:29.425466 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:29.425530 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
I0701 12:21:29.425542 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:29.425624 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
I0701 12:21:29.425732 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:29.425753 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
I0701 12:21:29.425758 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:29.425798 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
I0701 12:21:29.425856 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:29.425872 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
I0701 12:21:29.425877 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:29.425897 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
I0701 12:21:29.425958 652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
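
provision.go above generates a server certificate whose SANs are the IPs and hostnames listed (127.0.0.1, 192.168.39.16, ha-735960, localhost, minikube). A runnable sketch producing a certificate with those SANs; it self-signs for brevity, whereas minikube signs with its own CA key:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "fmt"
    "math/big"
    "net"
    "time"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    // SANs taken from the provision.go line above; expiry matches the
    // CertExpiration:26280h0m0s field in the cluster config.
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-735960", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.16")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    fmt.Printf("server.pem (%d bytes):\n%s", len(pemBytes), pemBytes)
}
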
I0701 12:21:29.592360 652196 provision.go:177] copyRemoteCerts
I0701 12:21:29.592437 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 12:21:29.592463 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.595489 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.595884 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.595908 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.596131 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.596356 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.596515 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.596646 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:29.684124 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
I0701 12:21:29.684214 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0701 12:21:29.707185 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0701 12:21:29.707254 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 12:21:29.729605 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0701 12:21:29.729687 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 12:21:29.751505 652196 provision.go:87] duration metric: took 332.894756ms to configureAuth
I0701 12:21:29.751536 652196 buildroot.go:189] setting minikube options for container-runtime
I0701 12:21:29.751802 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:29.751834 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:29.752179 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.754903 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.755331 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.755367 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.755494 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.755709 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.755868 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.756016 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.756168 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.756341 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.756351 652196 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 12:21:29.867557 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0701 12:21:29.867582 652196 buildroot.go:70] root file system type: tmpfs
I0701 12:21:29.867738 652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 12:21:29.867768 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.870702 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.871111 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.871152 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.871294 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.871532 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.871806 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.871989 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.872177 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.872347 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.872410 652196 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 12:21:29.995623 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 12:21:29.995671 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.998574 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.998969 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.999001 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.999184 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.999403 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.999598 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.999772 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.999916 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:30.000093 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:30.000109 652196 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 12:21:31.849411 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0701 12:21:31.849452 652196 machine.go:97] duration metric: took 2.802248138s to provisionDockerMachine
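
The SSH one-liner above writes docker.service.new, diffs it against the installed unit, and only swaps the file and restarts Docker when the content differs (here the diff failed because no unit existed yet, so the new file was moved into place and the service enabled). A Go sketch of the same idempotent install pattern:

package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

// installUnit swaps newPath into unitPath only when content differs, then
// reloads systemd and restarts the service, the same shape as the
// "diff ... || { mv ...; systemctl ... }" command in the log.
func installUnit(unitPath, newPath, service string) error {
    oldData, err := os.ReadFile(unitPath) // a missing unit counts as "different"
    newData, err2 := os.ReadFile(newPath)
    if err2 != nil {
        return err2
    }
    if err == nil && bytes.Equal(oldData, newData) {
        return nil // unchanged: nothing to reload or restart
    }
    if err := os.Rename(newPath, unitPath); err != nil {
        return err
    }
    for _, args := range [][]string{
        {"daemon-reload"}, {"enable", service}, {"restart", service},
    } {
        if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
            return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
        }
    }
    return nil
}

func main() {
    fmt.Println(installUnit("/lib/systemd/system/docker.service",
        "/lib/systemd/system/docker.service.new", "docker"))
}
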
I0701 12:21:31.849473 652196 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
I0701 12:21:31.849487 652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 12:21:31.849508 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:31.849934 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 12:21:31.849982 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:31.853029 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.853464 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:31.853494 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.853656 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:31.853877 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:31.854065 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:31.854242 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:31.948096 652196 ssh_runner.go:195] Run: cat /etc/os-release
I0701 12:21:31.952493 652196 info.go:137] Remote host: Buildroot 2023.02.9
I0701 12:21:31.952522 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
I0701 12:21:31.952580 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
I0701 12:21:31.952654 652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
I0701 12:21:31.952664 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
I0701 12:21:31.952750 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 12:21:31.962034 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
I0701 12:21:31.985898 652196 start.go:296] duration metric: took 136.407484ms for postStartSetup
I0701 12:21:31.985953 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:31.986287 652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0701 12:21:31.986316 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:31.988934 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.989328 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:31.989359 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.989497 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:31.989724 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:31.989863 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:31.990038 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:32.076710 652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0701 12:21:32.076807 652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0701 12:21:32.133792 652196 fix.go:56] duration metric: took 18.045488816s for fixHost
I0701 12:21:32.133863 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:32.136703 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.137078 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.137110 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.137321 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:32.137591 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.137793 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.137963 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:32.138201 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:32.138518 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:32.138541 652196 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0701 12:21:32.254973 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836492.215186729
I0701 12:21:32.255001 652196 fix.go:216] guest clock: 1719836492.215186729
I0701 12:21:32.255007 652196 fix.go:229] Guest: 2024-07-01 12:21:32.215186729 +0000 UTC Remote: 2024-07-01 12:21:32.133836118 +0000 UTC m=+18.172225533 (delta=81.350611ms)
I0701 12:21:32.255027 652196 fix.go:200] guest clock delta is within tolerance: 81.350611ms
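
fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the ~81ms drift as within tolerance. A sketch of that comparison; the 2-second tolerance is an assumption for illustration:

package main

import (
    "fmt"
    "math"
    "strconv"
    "time"
)

// clockDelta parses the guest's `date +%s.%N` output and reports the drift
// from the host clock (float64 parsing keeps roughly microsecond precision,
// which is plenty for a millisecond-scale tolerance check).
func clockDelta(guestOut string, host time.Time) (time.Duration, bool, error) {
    secs, err := strconv.ParseFloat(guestOut, 64)
    if err != nil {
        return 0, false, err
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    delta := guest.Sub(host)
    within := math.Abs(delta.Seconds()) < 2.0
    return delta, within, nil
}

func main() {
    host := time.Unix(0, int64(1719836492.133836118*float64(time.Second)))
    delta, ok, _ := clockDelta("1719836492.215186729", host)
    fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok) // ~81ms, true
}
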
I0701 12:21:32.255032 652196 start.go:83] releasing machines lock for "ha-735960", held for 18.166751927s
I0701 12:21:32.255050 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.255338 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:32.258091 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.258459 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.258481 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.258679 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.259224 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.259383 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.259520 652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0701 12:21:32.259564 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:32.259693 652196 ssh_runner.go:195] Run: cat /version.json
I0701 12:21:32.259718 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:32.262127 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.262481 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.262518 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.262538 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.262653 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:32.262845 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.263031 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:32.263054 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.263074 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.263215 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:32.263229 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:32.263398 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.263547 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:32.263699 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:32.343012 652196 ssh_runner.go:195] Run: systemctl --version
I0701 12:21:32.428409 652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0701 12:21:32.433742 652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0701 12:21:32.433815 652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0701 12:21:32.449052 652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
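(Annotation: the find/mv pipeline above sidelines any bridge or podman CNI config by renaming it with a .mk_disabled suffix so the runtime stops loading it. A native-Go sketch of the same idea; disableCNIConfs is a hypothetical helper, not minikube's API.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI configs in dir so the container
// runtime no longer loads them, mirroring the shell pipeline in the log.
func disableCNIConfs(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("disabled:", disabled)
}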
I0701 12:21:32.449087 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:32.449338 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:32.471651 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0701 12:21:32.481832 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0701 12:21:32.491470 652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0701 12:21:32.491548 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0701 12:21:32.501229 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:32.511119 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0701 12:21:32.520826 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:32.530559 652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0701 12:21:32.542109 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0701 12:21:32.551821 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0701 12:21:32.561403 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0701 12:21:32.571068 652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0701 12:21:32.579813 652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0701 12:21:32.588595 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:32.705377 652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
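(Annotation: the sed invocations above patch /etc/containerd/config.toml in place; SystemdCgroup = false selects the cgroupfs driver named in the log. A sketch of the SystemdCgroup rewrite done natively instead of via sed; assumes the file already contains the key.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup key in a containerd config,
// equivalent to the `sed -i -r 's|^( *)SystemdCgroup = ...` call in the log.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// false selects the cgroupfs driver, matching the log above.
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println("error:", err)
	}
}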
I0701 12:21:32.724169 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:32.724285 652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 12:21:32.739050 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:32.753169 652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0701 12:21:32.769805 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:32.783750 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:32.797509 652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0701 12:21:32.821510 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:32.835901 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:32.854192 652196 ssh_runner.go:195] Run: which cri-dockerd
I0701 12:21:32.858039 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0701 12:21:32.867652 652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0701 12:21:32.884216 652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 12:21:33.001636 652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 12:21:33.121229 652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0701 12:21:33.121419 652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0701 12:21:33.138482 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:33.262395 652196 ssh_runner.go:195] Run: sudo systemctl restart docker
I0701 12:21:35.714549 652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.452099351s)
I0701 12:21:35.714642 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0701 12:21:35.727946 652196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0701 12:21:35.744089 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0701 12:21:35.757426 652196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0701 12:21:35.868089 652196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0701 12:21:35.989857 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:36.121343 652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0701 12:21:36.138520 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0701 12:21:36.152026 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:36.271312 652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0701 12:21:36.351567 652196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0701 12:21:36.351668 652196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0701 12:21:36.357143 652196 start.go:562] Will wait 60s for crictl version
I0701 12:21:36.357212 652196 ssh_runner.go:195] Run: which crictl
I0701 12:21:36.361384 652196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0701 12:21:36.400372 652196 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.0.1
RuntimeApiVersion: v1
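(Annotation: "Will wait 60s for socket path" above is a poll on /var/run/cri-dockerd.sock until it exists. A minimal sketch of that wait; the 500ms poll interval is an assumption of this sketch.)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls a unix socket path until it exists or the deadline
// passes, mirroring the "Will wait 60s for socket path" step in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}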
I0701 12:21:36.400446 652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 12:21:36.427941 652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 12:21:36.456620 652196 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
I0701 12:21:36.456687 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:36.459384 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:36.459752 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:36.459781 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:36.459970 652196 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0701 12:21:36.463956 652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
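(Annotation: the grep/echo pipeline above makes the /etc/hosts update idempotent: strip any existing line ending in a tab plus the hostname, then append a fresh entry. A sketch of the same upsert in Go; upsertHostsEntry is a hypothetical helper.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any stale line for hostname and appends a fresh
// "ip<TAB>hostname" entry, like the grep -v / echo pipeline in the log.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}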
I0701 12:21:36.476676 652196 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0701 12:21:36.476851 652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 12:21:36.476914 652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 12:21:36.493466 652196 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
kindest/kindnetd:v20240513-cd2ac642
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0701 12:21:36.493530 652196 docker.go:615] Images already preloaded, skipping extraction
I0701 12:21:36.493620 652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 12:21:36.510908 652196 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
kindest/kindnetd:v20240513-cd2ac642
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0701 12:21:36.510939 652196 cache_images.go:84] Images are preloaded, skipping loading
I0701 12:21:36.510952 652196 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
I0701 12:21:36.511079 652196 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0701 12:21:36.511139 652196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0701 12:21:36.536408 652196 cni.go:84] Creating CNI manager for ""
I0701 12:21:36.536430 652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0701 12:21:36.536441 652196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0701 12:21:36.536470 652196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0701 12:21:36.536633 652196 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.16
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "ha-735960"
  kubeletExtraArgs:
    node-ip: 192.168.39.16
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
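(Annotation: the config above is four YAML documents in one file -- InitConfiguration and ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration -- rendered from the cluster profile. A toy sketch of that rendering via text/template; the template text and struct fields here are illustrative, not minikube's actual template.)

package main

import (
	"os"
	"text/template"
)

// initCfg is a fragment of a kubeadm config with profile values substituted.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	params := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.39.16", 8443, "ha-735960"}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}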
I0701 12:21:36.536656 652196 kube-vip.go:115] generating kube-vip config ...
I0701 12:21:36.536698 652196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0701 12:21:36.551906 652196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0701 12:21:36.552024 652196 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_nodename
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.168.39.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
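(Annotation: per the manifest above, kube-vip runs as a static pod on each control-plane node, takes the plndr-cp-lock lease via leader election, advertises the VIP 192.168.39.254 over ARP, and with lb_enable also balances API traffic on port 8443. From outside the cluster, a liveness probe of the VIP is just a TCP dial; this sketch is an external check, not part of minikube.)

package main

import (
	"fmt"
	"net"
	"time"
)

// checkVIP dials the kube-vip virtual IP on the API server port. The dial
// succeeds only while some leader-elected kube-vip instance holds the address.
func checkVIP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := checkVIP("192.168.39.254:8443", 5*time.Second); err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	fmt.Println("VIP is serving")
}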
I0701 12:21:36.552078 652196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0701 12:21:36.561989 652196 binaries.go:44] Found k8s binaries, skipping transfer
I0701 12:21:36.562059 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0701 12:21:36.571281 652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
I0701 12:21:36.587480 652196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 12:21:36.603596 652196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
I0701 12:21:36.621063 652196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0701 12:21:36.637192 652196 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0701 12:21:36.640909 652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 12:21:36.652690 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:36.768142 652196 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0701 12:21:36.786625 652196 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
I0701 12:21:36.786655 652196 certs.go:194] generating shared ca certs ...
I0701 12:21:36.786676 652196 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:36.786854 652196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
I0701 12:21:36.786904 652196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
I0701 12:21:36.786915 652196 certs.go:256] generating profile certs ...
I0701 12:21:36.787017 652196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
I0701 12:21:36.787046 652196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
I0701 12:21:36.787059 652196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16 192.168.39.86 192.168.39.97 192.168.39.254]
I0701 12:21:37.059263 652196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af ...
I0701 12:21:37.059305 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af: {Name:mk1be9dc4667506ac6fdcfb1e313edd1292fe7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.059483 652196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af ...
I0701 12:21:37.059496 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af: {Name:mkf9220e489bd04f035dab270c790bb3448ca6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.059596 652196 certs.go:381] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt
I0701 12:21:37.059809 652196 certs.go:385] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key
I0701 12:21:37.059969 652196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
I0701 12:21:37.059987 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0701 12:21:37.060000 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0701 12:21:37.060014 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0701 12:21:37.060026 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0701 12:21:37.060038 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0701 12:21:37.060054 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0701 12:21:37.060066 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0701 12:21:37.060077 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0701 12:21:37.060165 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
W0701 12:21:37.060197 652196 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
I0701 12:21:37.060207 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
I0701 12:21:37.060228 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
I0701 12:21:37.060248 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
I0701 12:21:37.060270 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
I0701 12:21:37.060305 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
I0701 12:21:37.060331 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
I0701 12:21:37.060347 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.060359 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.061045 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 12:21:37.111708 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0701 12:21:37.168649 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 12:21:37.204675 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 12:21:37.241167 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0701 12:21:37.265225 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 12:21:37.288613 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 12:21:37.312645 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0701 12:21:37.337494 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
I0701 12:21:37.361044 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
I0701 12:21:37.385424 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
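(Annotation: after the scp steps above, a cheap sanity check is confirming each cert actually pairs with its key. tls.LoadX509KeyPair does that match for free; this is a sketch under the assumption the profile certs are RSA, not minikube's own validation code.)

package main

import (
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// certMatchesKey loads a PEM cert/key pair; LoadX509KeyPair fails when the
// certificate's public key does not belong to the private key.
func certMatchesKey(certFile, keyFile string) error {
	pair, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	cert, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		return err
	}
	if _, ok := cert.PublicKey.(*rsa.PublicKey); !ok {
		return fmt.Errorf("unexpected key type %T", cert.PublicKey)
	}
	return nil
}

func main() {
	err := certMatchesKey(
		"/var/lib/minikube/certs/apiserver.crt",
		"/var/lib/minikube/certs/apiserver.key",
	)
	fmt.Println("apiserver cert/key check:", err)
}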
I0701 12:21:37.409054 652196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 12:21:37.426602 652196 ssh_runner.go:195] Run: openssl version
I0701 12:21:37.432129 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 12:21:37.442695 652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.447331 652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 1 12:05 /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.447415 652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.453215 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 12:21:37.464086 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
I0701 12:21:37.474527 652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
I0701 12:21:37.479057 652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 1 12:11 /usr/share/ca-certificates/637854.pem
I0701 12:21:37.479123 652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
I0701 12:21:37.484641 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
I0701 12:21:37.495175 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
I0701 12:21:37.505961 652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.510286 652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 1 12:11 /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.510365 652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.516124 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
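(Annotation: each of the three openssl/ln cycles above computes a certificate's subject hash and symlinks <hash>.0 in /etc/ssl/certs to it -- e.g. b5213941.0 for minikubeCA.pem -- which is how OpenSSL-based clients locate trusted CAs. A sketch wrapping the same two commands; linkBySubjectHash is a hypothetical helper.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the cert's OpenSSL subject hash and symlinks
// <hash>.0 in certsDir to the PEM file, like the log's openssl/ln dance.
func linkBySubjectHash(pem, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link if present
	return link, os.Symlink(pem, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("linked:", link)
}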
I0701 12:21:37.527154 652196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0701 12:21:37.532024 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0701 12:21:37.538145 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0701 12:21:37.544280 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0701 12:21:37.550448 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0701 12:21:37.556356 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0701 12:21:37.562174 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
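(Annotation: the `openssl x509 -checkend 86400` runs above ask whether each control-plane cert expires within 24 hours. The same check in native Go, as a sketch:)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within the
// given window -- the equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}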
I0701 12:21:37.568144 652196 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 12:21:37.568362 652196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 12:21:37.586457 652196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0701 12:21:37.596129 652196 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0701 12:21:37.596158 652196 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0701 12:21:37.596164 652196 kubeadm.go:587] restartPrimaryControlPlane start ...
I0701 12:21:37.596237 652196 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0701 12:21:37.605715 652196 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0701 12:21:37.606193 652196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:37.606354 652196 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
I0701 12:21:37.606708 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.607135 652196 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:37.607365 652196 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 12:21:37.607752 652196 cert_rotation.go:137] Starting client certificate rotation controller
I0701 12:21:37.608047 652196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0701 12:21:37.617685 652196 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
I0701 12:21:37.617715 652196 kubeadm.go:591] duration metric: took 21.544408ms to restartPrimaryControlPlane
I0701 12:21:37.617725 652196 kubeadm.go:393] duration metric: took 49.593354ms to StartCluster
I0701 12:21:37.617748 652196 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.617834 652196 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:37.618535 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.618754 652196 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 12:21:37.618777 652196 start.go:240] waiting for startup goroutines ...
I0701 12:21:37.618792 652196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0701 12:21:37.619028 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:37.621683 652196 out.go:177] * Enabled addons:
I0701 12:21:37.622979 652196 addons.go:510] duration metric: took 4.192015ms for enable addons: enabled=[]
I0701 12:21:37.623011 652196 start.go:245] waiting for cluster config update ...
I0701 12:21:37.623019 652196 start.go:254] writing updated cluster config ...
I0701 12:21:37.624600 652196 out.go:177]
I0701 12:21:37.626023 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:37.626124 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:37.627745 652196 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
I0701 12:21:37.628946 652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 12:21:37.628969 652196 cache.go:56] Caching tarball of preloaded images
I0701 12:21:37.629060 652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 12:21:37.629072 652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0701 12:21:37.629161 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:37.629353 652196 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 12:21:37.629411 652196 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "ha-735960-m02"
I0701 12:21:37.629427 652196 start.go:96] Skipping create...Using existing machine configuration
I0701 12:21:37.629440 652196 fix.go:54] fixHost starting: m02
I0701 12:21:37.629698 652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:21:37.629747 652196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:21:37.644981 652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
I0701 12:21:37.645473 652196 main.go:141] libmachine: () Calling .GetVersion
I0701 12:21:37.645947 652196 main.go:141] libmachine: Using API Version 1
I0701 12:21:37.645969 652196 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:21:37.646284 652196 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:21:37.646523 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:37.646646 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
I0701 12:21:37.648195 652196 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
I0701 12:21:37.648228 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
W0701 12:21:37.648406 652196 fix.go:138] unexpected machine state, will restart: <nil>
I0701 12:21:37.650489 652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
I0701 12:21:37.651975 652196 main.go:141] libmachine: (ha-735960-m02) Calling .Start
I0701 12:21:37.652186 652196 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
I0701 12:21:37.652916 652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
I0701 12:21:37.653282 652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
I0701 12:21:37.653613 652196 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
I0701 12:21:37.654254 652196 main.go:141] libmachine: (ha-735960-m02) Creating domain...
I0701 12:21:38.852369 652196 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
I0701 12:21:38.853358 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:38.853762 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:38.853832 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:38.853747 652384 retry.go:31] will retry after 295.798088ms: waiting for machine to come up
I0701 12:21:39.151332 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:39.151886 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:39.151912 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.151845 652384 retry.go:31] will retry after 255.18729ms: waiting for machine to come up
I0701 12:21:39.408310 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:39.408739 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:39.408792 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.408689 652384 retry.go:31] will retry after 457.740061ms: waiting for machine to come up
I0701 12:21:39.868295 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:39.868702 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:39.868736 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.868629 652384 retry.go:31] will retry after 548.674851ms: waiting for machine to come up
I0701 12:21:40.419597 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:40.420069 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:40.420100 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:40.420009 652384 retry.go:31] will retry after 755.113146ms: waiting for machine to come up
I0701 12:21:41.176960 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:41.177380 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:41.177429 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.177309 652384 retry.go:31] will retry after 739.288718ms: waiting for machine to come up
I0701 12:21:41.918305 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:41.918853 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:41.918884 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.918789 652384 retry.go:31] will retry after 722.041404ms: waiting for machine to come up
I0701 12:21:42.642704 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:42.643188 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:42.643219 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:42.643113 652384 retry.go:31] will retry after 1.139279839s: waiting for machine to come up
I0701 12:21:43.784719 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:43.785159 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:43.785193 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:43.785114 652384 retry.go:31] will retry after 1.276779849s: waiting for machine to come up
I0701 12:21:45.063522 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:45.064026 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:45.064058 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:45.063969 652384 retry.go:31] will retry after 2.284492799s: waiting for machine to come up
I0701 12:21:47.351530 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:47.352076 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:47.352113 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:47.351988 652384 retry.go:31] will retry after 2.171521184s: waiting for machine to come up
I0701 12:21:49.526162 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:49.526566 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:49.526590 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:49.526523 652384 retry.go:31] will retry after 3.533181759s: waiting for machine to come up
I0701 12:21:53.061482 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.062025 652196 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
I0701 12:21:53.062048 652196 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
I0701 12:21:53.062060 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.062473 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.062504 652196 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
I0701 12:21:53.062534 652196 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
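(Annotation: the retry.go loop above polls libvirt's DHCP leases with randomized, growing delays -- the log shows intervals from ~255ms up to 3.5s before the IP appears. A sketch of that retry shape; the backoff constants and waitForIP helper are assumptions of this sketch, not minikube's exact policy.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with jittered, doubling backoff until it
// returns an IP or the deadline passes. lookup stands in for the DHCP query.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// jitter and grow the delay, capped so we keep probing
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.86", nil
	}, time.Minute)
	fmt.Println(ip, err)
}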
I0701 12:21:53.062554 652196 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
I0701 12:21:53.062566 652196 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
I0701 12:21:53.064461 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.064796 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.064828 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.064893 652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
I0701 12:21:53.064938 652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
I0701 12:21:53.064965 652196 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0701 12:21:53.064981 652196 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
I0701 12:21:53.065000 652196 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
I0701 12:21:53.190266 652196 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>:
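(Annotation: WaitForSSH above proves the guest is reachable by running `exit 0` through an external ssh client with host-key checking disabled. A sketch of the same probe; the ssh flags are copied from the DBG line above, while the loop and 2s retry interval are assumptions of this sketch.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `ssh ... exit 0` against the guest until it succeeds or
// the timeout passes, matching the external-client probe in the log.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %v", ip, timeout)
}

func main() {
	err := waitForSSH("192.168.39.86",
		"/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa",
		time.Minute)
	fmt.Println("SSH ready:", err)
}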
I0701 12:21:53.190636 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
I0701 12:21:53.191272 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
I0701 12:21:53.193658 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.193994 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.194027 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.194274 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:53.194544 652196 machine.go:94] provisionDockerMachine start ...
I0701 12:21:53.194562 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:53.194814 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.196894 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.197262 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.197291 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.197414 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.197654 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.197829 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.198021 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.198185 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.198432 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.198448 652196 main.go:141] libmachine: About to run SSH command:
hostname
I0701 12:21:53.306480 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0701 12:21:53.306526 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
I0701 12:21:53.306839 652196 buildroot.go:166] provisioning hostname "ha-735960-m02"
I0701 12:21:53.306870 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
I0701 12:21:53.307063 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.309645 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.310086 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.310116 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.310307 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.310514 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.310689 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.310820 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.310997 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.311210 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.311225 652196 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
I0701 12:21:53.434956 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
I0701 12:21:53.434992 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.437612 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.438016 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.438040 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.438190 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.438418 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.438601 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.438768 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.438926 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.439106 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.439128 652196 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts;
fi
fi
I0701 12:21:53.559115 652196 main.go:141] libmachine: SSH cmd err, output: <nil>:
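The two SSH commands above implement hostname provisioning: set the kernel hostname and persist it to /etc/hostname, then ensure a 127.0.1.1 entry in /etc/hosts, rewriting an existing one in place and appending otherwise. A sketch of how such commands can be assembled for a given machine name, assuming nothing beyond fmt (the function name is illustrative, not minikube's):

package main

import "fmt"

// hostnameCommands returns the shell run over SSH to set and persist a
// hostname, mirroring the provisioning commands logged above.
func hostnameCommands(name string) []string {
	return []string{
		fmt.Sprintf(`sudo hostname %s && echo %q | sudo tee /etc/hostname`, name, name),
		fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
	}
}

func main() {
	for _, c := range hostnameCommands("ha-735960-m02") {
		fmt.Println(c)
	}
}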
I0701 12:21:53.559146 652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
I0701 12:21:53.559163 652196 buildroot.go:174] setting up certificates
I0701 12:21:53.559174 652196 provision.go:84] configureAuth start
I0701 12:21:53.559186 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
I0701 12:21:53.559514 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
I0701 12:21:53.562119 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.562516 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.562550 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.562753 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.564741 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.565063 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.565082 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.565233 652196 provision.go:143] copyHostCerts
I0701 12:21:53.565266 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:53.565309 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
I0701 12:21:53.565318 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:53.565379 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
I0701 12:21:53.565450 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:53.565468 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
I0701 12:21:53.565474 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:53.565492 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
I0701 12:21:53.565533 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:53.565549 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
I0701 12:21:53.565555 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:53.565570 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
I0701 12:21:53.565618 652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
I0701 12:21:53.749696 652196 provision.go:177] copyRemoteCerts
I0701 12:21:53.749755 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 12:21:53.749780 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.752460 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.752780 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.752813 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.752952 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.753159 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.753385 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.753547 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:53.835990 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0701 12:21:53.836060 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 12:21:53.858665 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
I0701 12:21:53.858753 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0701 12:21:53.880281 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0701 12:21:53.880367 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 12:21:53.902677 652196 provision.go:87] duration metric: took 343.48703ms to configureAuth
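configureAuth above regenerates the machine's TLS material: the host CA/client certs are synced into .minikube, a server certificate is minted from the local CA with the SAN list shown (127.0.0.1, 192.168.39.86, ha-735960-m02, localhost, minikube), and the results are scp'd into /etc/docker. A minimal sketch of that server-cert generation with crypto/x509, using a throwaway CA and the names copied from the log (error handling elided; this is an illustration, not minikube's actual code path):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SANs mirror the san=[...] list logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-735960-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-735960-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.86")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}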
I0701 12:21:53.902709 652196 buildroot.go:189] setting minikube options for container-runtime
I0701 12:21:53.903020 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:53.903053 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:53.903351 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.905929 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.906189 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.906216 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.906438 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.906667 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.906826 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.906966 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.907119 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.907282 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.907294 652196 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 12:21:54.019474 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0701 12:21:54.019501 652196 buildroot.go:70] root file system type: tmpfs
I0701 12:21:54.019656 652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 12:21:54.019681 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:54.022816 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.023184 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:54.023208 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.023371 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:54.023579 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.023787 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.023946 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:54.024146 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:54.024319 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:54.024384 652196 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.16"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 12:21:54.147740 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.16
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 12:21:54.147778 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:54.150547 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.151173 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:54.151208 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.151345 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:54.151561 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.151771 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.151918 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:54.152095 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:54.152266 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:54.152281 652196 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 12:21:56.028628 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0701 12:21:56.028682 652196 machine.go:97] duration metric: took 2.834118436s to provisionDockerMachine
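The install step above is deliberately idempotent: the rendered unit is diffed against the live one, and only when they differ (here the diff fails outright because docker.service did not exist yet) is the new file moved into place, the daemon reloaded, and docker enabled and restarted. The same guard, sketched in Go with hypothetical names:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors the "diff || { mv; daemon-reload; enable; restart }"
// pattern logged above: only swap the unit in and bounce the daemon when the
// rendered file actually differs. Illustrative helper, not minikube's API.
func installIfChanged(current, next string) error {
	old, _ := os.ReadFile(current) // a missing unit reads as empty, like the failed diff
	neu, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(old, neu) {
		return nil // nothing changed; docker keeps running undisturbed
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}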
I0701 12:21:56.028701 652196 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
I0701 12:21:56.028716 652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 12:21:56.028738 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.029099 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 12:21:56.029132 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.031882 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.032264 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.032289 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.032433 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.032608 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.032817 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.032971 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:56.117309 652196 ssh_runner.go:195] Run: cat /etc/os-release
I0701 12:21:56.121231 652196 info.go:137] Remote host: Buildroot 2023.02.9
I0701 12:21:56.121263 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
I0701 12:21:56.121324 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
I0701 12:21:56.121391 652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
I0701 12:21:56.121402 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
I0701 12:21:56.121478 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 12:21:56.130302 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
I0701 12:21:56.152776 652196 start.go:296] duration metric: took 124.058691ms for postStartSetup
I0701 12:21:56.152821 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.153142 652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0701 12:21:56.153170 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.155689 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.156094 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.156120 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.156332 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.156555 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.156727 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.156917 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:56.240391 652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0701 12:21:56.240454 652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0701 12:21:56.280843 652196 fix.go:56] duration metric: took 18.651393475s for fixHost
I0701 12:21:56.280895 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.283268 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.283590 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.283617 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.283860 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.284107 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.284307 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.284501 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.284686 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:56.284888 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:56.284903 652196 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0701 12:21:56.398873 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836516.359963406
I0701 12:21:56.398893 652196 fix.go:216] guest clock: 1719836516.359963406
I0701 12:21:56.398901 652196 fix.go:229] Guest: 2024-07-01 12:21:56.359963406 +0000 UTC Remote: 2024-07-01 12:21:56.280872467 +0000 UTC m=+42.319261894 (delta=79.090939ms)
I0701 12:21:56.398919 652196 fix.go:200] guest clock delta is within tolerance: 79.090939ms
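fixHost finishes by comparing clocks: the guest reports "date +%s.%N", and the delta against the host (79ms here) is only corrected when it exceeds a tolerance. A small sketch of that check; the 2-second threshold is an assumption for illustration, not minikube's constant:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock needs adjusting.
// The 2s threshold is an assumed value, not minikube's actual constant.
func withinTolerance(guest, host time.Time) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= 2*time.Second
}

func main() {
	guest := time.Unix(1719836516, 359963406) // parsed from "date +%s.%N" above
	fmt.Println("within tolerance:", withinTolerance(guest, time.Now()))
}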
I0701 12:21:56.398924 652196 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.769503298s
I0701 12:21:56.398940 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.399198 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
I0701 12:21:56.401982 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.402404 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.402436 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.404680 652196 out.go:177] * Found network options:
I0701 12:21:56.406167 652196 out.go:177] - NO_PROXY=192.168.39.16
W0701 12:21:56.407620 652196 proxy.go:119] fail to check proxy env: Error ip not in block
I0701 12:21:56.407664 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.408285 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.408498 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.408606 652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0701 12:21:56.408647 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
W0701 12:21:56.408741 652196 proxy.go:119] fail to check proxy env: Error ip not in block
I0701 12:21:56.408826 652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0701 12:21:56.408849 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.411170 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.411559 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.411598 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.411651 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.411933 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.412130 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.412221 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.412247 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.412295 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.412519 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.412508 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:56.412720 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.412871 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.412987 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
W0701 12:21:56.492511 652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0701 12:21:56.492595 652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0701 12:21:56.515270 652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0701 12:21:56.515305 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:56.515419 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:56.549004 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0701 12:21:56.560711 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0701 12:21:56.578763 652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0701 12:21:56.578832 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0701 12:21:56.589742 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:56.606645 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0701 12:21:56.620036 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:56.632033 652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0701 12:21:56.642458 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0701 12:21:56.653078 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0701 12:21:56.663035 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0701 12:21:56.673203 652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0701 12:21:56.682348 652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0701 12:21:56.691388 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:56.798709 652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0701 12:21:56.821386 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:56.821493 652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 12:21:56.841303 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:56.857934 652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0701 12:21:56.877318 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:56.889777 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:56.901844 652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0701 12:21:56.927595 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
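Before committing to docker, the start path disables the competing runtimes: each "systemctl is-active --quiet service <name>" probe is judged by exit status alone (0 means active), and anything active is stopped with "systemctl stop -f", then re-probed. A sketch of that probe-then-stop loop, mirroring the commands logged above (the function is illustrative, not minikube's):

package main

import (
	"fmt"
	"os/exec"
)

// disableCompeting stops any container runtime that systemd reports active,
// mirroring the is-active / stop -f sequence in the log above.
func disableCompeting(runtimes ...string) {
	for _, rt := range runtimes {
		// Exit status 0 means the unit is active; any error means inactive.
		if exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", rt).Run() == nil {
			if err := exec.Command("sudo", "systemctl", "stop", "-f", rt).Run(); err != nil {
				fmt.Println("failed to stop", rt+":", err)
			}
		}
	}
}

func main() {
	disableCompeting("containerd", "crio")
}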
I0701 12:21:56.940849 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:56.958116 652196 ssh_runner.go:195] Run: which cri-dockerd
I0701 12:21:56.961664 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0701 12:21:56.969985 652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0701 12:21:56.985048 652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 12:21:57.096072 652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 12:21:57.211289 652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0701 12:21:57.211354 652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
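The 130-byte payload copied to /etc/docker/daemon.json is not echoed in the log. A plausible shape, assuming only that the "cgroupfs" driver selected above is set via exec-opts (the other keys are guesses at common defaults, and minikube's exact contents may differ):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed daemon.json contents; only the cgroup driver is implied by the
	// `configuring docker to use "cgroupfs"` message above, the rest is a guess.
	cfg := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}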
I0701 12:21:57.227069 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:57.341292 652196 ssh_runner.go:195] Run: sudo systemctl restart docker
I0701 12:22:58.423195 652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.08185799s)
I0701 12:22:58.423268 652196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0701 12:22:58.444321 652196 out.go:177]
W0701 12:22:58.445678 652196 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
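The decisive lines in the journal are from the second dockerd start (pid 1188): it spends the full minute failing to dial /run/containerd/containerd.sock and exits on the context deadline, whereas the first start (pid 506) had spawned its own managed containerd under /var/run/docker/containerd/. One plausible reading, not confirmed by the log alone, is a stale /run/containerd/containerd.sock left behind by the system containerd this run stopped at 12:21:56: when that socket path is present, dockerd prefers it over starting a managed containerd, and the dial never completes.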
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0701 12:22:58.445741 652196 out.go:239] *
W0701 12:22:58.447325 652196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 12:22:58.449434 652196 out.go:177]
** /stderr **
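Note on the failure above: the first dockerd (pid 506) came up by launching its own managed containerd (/var/run/docker/containerd/containerd.sock), but after minikube's `sudo systemctl restart docker`, the restarted dockerd (pid 1188) dialed /run/containerd/containerd.sock instead and gave up after 60 seconds (12:21:58 to 12:22:58, "context deadline exceeded"), so docker.service failed and the start aborted with RUNTIME_ENABLE. A minimal triage sketch following the advice in the error text; the containerd checks are illustrative assumptions, not steps recorded in this log:

$ out/minikube-linux-amd64 ssh -p ha-735960 -n ha-735960-m02   # open a shell on the failing node
# inside the node:
$ sudo systemctl status docker.service      # suggested by the error text
$ sudo journalctl -xeu docker.service       # suggested by the error text
$ sudo systemctl status containerd          # assumption: inspect the unit that owns the socket dockerd failed to dial
$ ls -l /run/containerd/containerd.sock     # assumption: confirm the socket exists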
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 start -p ha-735960 --wait=true -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run: out/minikube-linux-amd64 node list -p ha-735960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-735960 -n ha-735960: exit status 2 (231.983714ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p ha-735960 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs:
-- stdout --
==> Audit <==
|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| cp | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m02:/home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-735960 ssh -n ha-735960-m02 sudo cat | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | /home/docker/cp-test_ha-735960-m03_ha-735960-m02.txt | | | | | |
| cp | ha-735960 cp ha-735960-m03:/home/docker/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04:/home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-735960 ssh -n ha-735960-m04 sudo cat | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | /home/docker/cp-test_ha-735960-m03_ha-735960-m04.txt | | | | | |
| cp | ha-735960 cp testdata/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04:/home/docker/cp-test.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | /tmp/TestMultiControlPlaneserialCopyFile2826819896/001/cp-test_ha-735960-m04.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960:/home/docker/cp-test_ha-735960-m04_ha-735960.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-735960 ssh -n ha-735960 sudo cat | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | /home/docker/cp-test_ha-735960-m04_ha-735960.txt | | | | | |
| cp | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m02:/home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-735960 ssh -n ha-735960-m02 sudo cat | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | /home/docker/cp-test_ha-735960-m04_ha-735960-m02.txt | | | | | |
| cp | ha-735960 cp ha-735960-m04:/home/docker/cp-test.txt | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m03:/home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt | | | | | |
| ssh | ha-735960 ssh -n | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | ha-735960-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-735960 ssh -n ha-735960-m03 sudo cat | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | /home/docker/cp-test_ha-735960-m04_ha-735960-m03.txt | | | | | |
| node | ha-735960 node stop m02 -v=7 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:19 UTC |
| | --alsologtostderr | | | | | |
| node | ha-735960 node start m02 -v=7 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:19 UTC | 01 Jul 24 12:20 UTC |
| | --alsologtostderr | | | | | |
| node | list -p ha-735960 -v=7 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | |
| | --alsologtostderr | | | | | |
| stop | -p ha-735960 -v=7 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:20 UTC | 01 Jul 24 12:21 UTC |
| | --alsologtostderr | | | | | |
| start | -p ha-735960 --wait=true -v=7 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:21 UTC | |
| | --alsologtostderr | | | | | |
| node | list -p ha-735960 | ha-735960 | jenkins | v1.33.1 | 01 Jul 24 12:22 UTC | |
|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/01 12:21:13
Running on machine: ubuntu-20-agent-7
Binary: Built with gc go1.22.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0701 12:21:13.996326 652196 out.go:291] Setting OutFile to fd 1 ...
I0701 12:21:13.996600 652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:21:13.996610 652196 out.go:304] Setting ErrFile to fd 2...
I0701 12:21:13.996615 652196 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 12:21:13.996825 652196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-630650/.minikube/bin
I0701 12:21:13.997417 652196 out.go:298] Setting JSON to false
I0701 12:21:13.998463 652196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7412,"bootTime":1719829062,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0701 12:21:13.998525 652196 start.go:139] virtualization: kvm guest
I0701 12:21:14.000967 652196 out.go:177] * [ha-735960] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0701 12:21:14.002666 652196 out.go:177] - MINIKUBE_LOCATION=19166
I0701 12:21:14.002690 652196 notify.go:220] Checking for updates...
I0701 12:21:14.005489 652196 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0701 12:21:14.006983 652196 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:14.008350 652196 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-630650/.minikube
I0701 12:21:14.009593 652196 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0701 12:21:14.011091 652196 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0701 12:21:14.012857 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:14.012999 652196 driver.go:392] Setting default libvirt URI to qemu:///system
I0701 12:21:14.013468 652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:21:14.013542 652196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:21:14.028581 652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
I0701 12:21:14.028967 652196 main.go:141] libmachine: () Calling .GetVersion
I0701 12:21:14.029528 652196 main.go:141] libmachine: Using API Version 1
I0701 12:21:14.029551 652196 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:21:14.029916 652196 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:21:14.030116 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:14.065038 652196 out.go:177] * Using the kvm2 driver based on existing profile
I0701 12:21:14.066535 652196 start.go:297] selected driver: kvm2
I0701 12:21:14.066551 652196 start.go:901] validating driver "kvm2" against &{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 12:21:14.066723 652196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0701 12:21:14.067041 652196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 12:21:14.067114 652196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19166-630650/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0701 12:21:14.082191 652196 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0701 12:21:14.082920 652196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0701 12:21:14.082959 652196 cni.go:84] Creating CNI manager for ""
I0701 12:21:14.082966 652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0701 12:21:14.083026 652196 start.go:340] cluster config:
{Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 12:21:14.083142 652196 iso.go:125] acquiring lock: {Name:mk5c70910f61bc270c83609c48670eaf9d7e0602 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0701 12:21:14.086358 652196 out.go:177] * Starting "ha-735960" primary control-plane node in "ha-735960" cluster
I0701 12:21:14.087757 652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 12:21:14.087794 652196 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0701 12:21:14.087805 652196 cache.go:56] Caching tarball of preloaded images
I0701 12:21:14.087882 652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 12:21:14.087892 652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0701 12:21:14.088044 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:14.088232 652196 start.go:360] acquireMachinesLock for ha-735960: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 12:21:14.088271 652196 start.go:364] duration metric: took 21.615µs to acquireMachinesLock for "ha-735960"
I0701 12:21:14.088285 652196 start.go:96] Skipping create...Using existing machine configuration
I0701 12:21:14.088293 652196 fix.go:54] fixHost starting:
I0701 12:21:14.088547 652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:21:14.088578 652196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:21:14.103070 652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42935
I0701 12:21:14.103508 652196 main.go:141] libmachine: () Calling .GetVersion
I0701 12:21:14.104025 652196 main.go:141] libmachine: Using API Version 1
I0701 12:21:14.104050 652196 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:21:14.104424 652196 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:21:14.104649 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:14.104829 652196 main.go:141] libmachine: (ha-735960) Calling .GetState
I0701 12:21:14.106608 652196 fix.go:112] recreateIfNeeded on ha-735960: state=Stopped err=<nil>
I0701 12:21:14.106630 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
W0701 12:21:14.106790 652196 fix.go:138] unexpected machine state, will restart: <nil>
I0701 12:21:14.108833 652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960" ...
I0701 12:21:14.110060 652196 main.go:141] libmachine: (ha-735960) Calling .Start
I0701 12:21:14.110234 652196 main.go:141] libmachine: (ha-735960) Ensuring networks are active...
I0701 12:21:14.110976 652196 main.go:141] libmachine: (ha-735960) Ensuring network default is active
I0701 12:21:14.111299 652196 main.go:141] libmachine: (ha-735960) Ensuring network mk-ha-735960 is active
I0701 12:21:14.111665 652196 main.go:141] libmachine: (ha-735960) Getting domain xml...
I0701 12:21:14.112420 652196 main.go:141] libmachine: (ha-735960) Creating domain...
I0701 12:21:15.307133 652196 main.go:141] libmachine: (ha-735960) Waiting to get IP...
I0701 12:21:15.308062 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:15.308526 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:15.308647 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.308493 652224 retry.go:31] will retry after 239.111405ms: waiting for machine to come up
I0701 12:21:15.549211 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:15.549648 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:15.549679 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.549597 652224 retry.go:31] will retry after 248.256131ms: waiting for machine to come up
I0701 12:21:15.799054 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:15.799481 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:15.799534 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:15.799422 652224 retry.go:31] will retry after 380.468685ms: waiting for machine to come up
I0701 12:21:16.181969 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:16.182432 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:16.182634 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.182540 652224 retry.go:31] will retry after 592.847587ms: waiting for machine to come up
I0701 12:21:16.777393 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:16.777837 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:16.777867 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:16.777790 652224 retry.go:31] will retry after 639.749416ms: waiting for machine to come up
I0701 12:21:17.419540 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:17.419941 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:17.419965 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:17.419916 652224 retry.go:31] will retry after 891.768613ms: waiting for machine to come up
I0701 12:21:18.312967 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:18.313455 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:18.313484 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:18.313399 652224 retry.go:31] will retry after 1.112048412s: waiting for machine to come up
I0701 12:21:19.427190 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:19.427624 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:19.427655 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:19.427568 652224 retry.go:31] will retry after 1.150138437s: waiting for machine to come up
I0701 12:21:20.579868 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:20.580291 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:20.580325 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:20.580216 652224 retry.go:31] will retry after 1.129763596s: waiting for machine to come up
I0701 12:21:21.711416 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:21.711892 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:21.711924 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:21.711831 652224 retry.go:31] will retry after 2.143074349s: waiting for machine to come up
I0701 12:21:23.858081 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:23.858617 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:23.858643 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:23.858578 652224 retry.go:31] will retry after 2.436757856s: waiting for machine to come up
I0701 12:21:26.297727 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:26.298302 652196 main.go:141] libmachine: (ha-735960) DBG | unable to find current IP address of domain ha-735960 in network mk-ha-735960
I0701 12:21:26.298352 652196 main.go:141] libmachine: (ha-735960) DBG | I0701 12:21:26.298269 652224 retry.go:31] will retry after 2.609229165s: waiting for machine to come up
I0701 12:21:28.911224 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.911698 652196 main.go:141] libmachine: (ha-735960) Found IP for machine: 192.168.39.16
I0701 12:21:28.911722 652196 main.go:141] libmachine: (ha-735960) Reserving static IP address...
I0701 12:21:28.911731 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has current primary IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.912401 652196 main.go:141] libmachine: (ha-735960) Reserved static IP address: 192.168.39.16
I0701 12:21:28.912425 652196 main.go:141] libmachine: (ha-735960) Waiting for SSH to be available...
I0701 12:21:28.912468 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:28.912492 652196 main.go:141] libmachine: (ha-735960) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960", mac: "52:54:00:6c:20:7c", ip: "192.168.39.16"}
I0701 12:21:28.912507 652196 main.go:141] libmachine: (ha-735960) DBG | Getting to WaitForSSH function...
I0701 12:21:28.914934 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.915448 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:28.915478 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:28.915627 652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH client type: external
I0701 12:21:28.915655 652196 main.go:141] libmachine: (ha-735960) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa (-rw-------)
I0701 12:21:28.915680 652196 main.go:141] libmachine: (ha-735960) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.16 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa -p 22] /usr/bin/ssh <nil>}
I0701 12:21:28.915698 652196 main.go:141] libmachine: (ha-735960) DBG | About to run SSH command:
I0701 12:21:28.915730 652196 main.go:141] libmachine: (ha-735960) DBG | exit 0
I0701 12:21:29.042314 652196 main.go:141] libmachine: (ha-735960) DBG | SSH cmd err, output: <nil>:
I0701 12:21:29.042747 652196 main.go:141] libmachine: (ha-735960) Calling .GetConfigRaw
I0701 12:21:29.043414 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:29.046291 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.046689 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.046714 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.046967 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:29.047187 652196 machine.go:94] provisionDockerMachine start ...
I0701 12:21:29.047211 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:29.047467 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.049524 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.049899 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.049924 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.050040 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.050240 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.050477 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.050669 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.050868 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.051073 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.051086 652196 main.go:141] libmachine: About to run SSH command:
hostname
I0701 12:21:29.166645 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0701 12:21:29.166687 652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
I0701 12:21:29.166983 652196 buildroot.go:166] provisioning hostname "ha-735960"
I0701 12:21:29.167013 652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
I0701 12:21:29.167232 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.169829 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.170228 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.170260 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.170403 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.170603 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.170773 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.170913 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.171082 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.171259 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.171270 652196 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-735960 && echo "ha-735960" | sudo tee /etc/hostname
I0701 12:21:29.295697 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960
I0701 12:21:29.295728 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.298625 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.299014 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.299041 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.299233 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.299434 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.299641 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.299795 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.299954 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.300149 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.300171 652196 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-735960' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960/g' /etc/hosts;
else
echo '127.0.1.1 ha-735960' | sudo tee -a /etc/hosts;
fi
fi
I0701 12:21:29.418489 652196 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0701 12:21:29.418522 652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
I0701 12:21:29.418577 652196 buildroot.go:174] setting up certificates
I0701 12:21:29.418593 652196 provision.go:84] configureAuth start
I0701 12:21:29.418612 652196 main.go:141] libmachine: (ha-735960) Calling .GetMachineName
I0701 12:21:29.418889 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:29.421815 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.422238 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.422275 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.422477 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.424787 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.425187 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.425216 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.425427 652196 provision.go:143] copyHostCerts
I0701 12:21:29.425466 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:29.425530 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
I0701 12:21:29.425542 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:29.425624 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
I0701 12:21:29.425732 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:29.425753 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
I0701 12:21:29.425758 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:29.425798 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
I0701 12:21:29.425856 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:29.425872 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
I0701 12:21:29.425877 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:29.425897 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
I0701 12:21:29.425958 652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960 san=[127.0.0.1 192.168.39.16 ha-735960 localhost minikube]
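For context, the "generating server cert" step above issues a TLS server certificate signed by the machine CA, with exactly the SANs listed in the log line. minikube does this in Go via crypto/x509 rather than shelling out; a minimal openssl sketch of the equivalent operation (hypothetical file names) is:

# Issue a server cert signed by the machine CA with the SANs from the log line (sketch).
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-735960"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 365 \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.16,DNS:ha-735960,DNS:localhost,DNS:minikube")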
I0701 12:21:29.592360 652196 provision.go:177] copyRemoteCerts
I0701 12:21:29.592437 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 12:21:29.592463 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.595489 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.595884 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.595908 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.596131 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.596356 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.596515 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.596646 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:29.684124 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
I0701 12:21:29.684214 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0701 12:21:29.707185 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0701 12:21:29.707254 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 12:21:29.729605 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0701 12:21:29.729687 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 12:21:29.751505 652196 provision.go:87] duration metric: took 332.894756ms to configureAuth
I0701 12:21:29.751536 652196 buildroot.go:189] setting minikube options for container-runtime
I0701 12:21:29.751802 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:29.751834 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:29.752179 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.754903 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.755331 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.755367 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.755494 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.755709 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.755868 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.756016 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.756168 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.756341 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.756351 652196 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 12:21:29.867557 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0701 12:21:29.867582 652196 buildroot.go:70] root file system type: tmpfs
I0701 12:21:29.867738 652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 12:21:29.867768 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.870702 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.871111 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.871152 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.871294 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.871532 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.871806 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.871989 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.872177 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:29.872347 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:29.872410 652196 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 12:21:29.995623 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 12:21:29.995671 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:29.998574 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.998969 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:29.999001 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:29.999184 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:29.999403 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.999598 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:29.999772 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:29.999916 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:30.000093 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:30.000109 652196 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 12:21:31.849411 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
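The command above is the idempotent unit-install pattern minikube uses: write the candidate unit to docker.service.new, and only when it differs from the installed unit (or, as here, the installed unit does not exist yet, hence the "can't stat" message) move it into place, reload systemd, and restart the service. A standalone sketch of the same pattern with a hypothetical unit:

# Install $NEW as $UNIT only when the content actually changed (sketch).
UNIT=/lib/systemd/system/example.service   # hypothetical unit name
NEW=$UNIT.new
sudo diff -u "$UNIT" "$NEW" || {
  sudo mv "$NEW" "$UNIT"
  sudo systemctl daemon-reload
  sudo systemctl enable --now example.service
}

diff exits nonzero both when the files differ and when the target is missing, so the install branch runs in either case.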
I0701 12:21:31.849452 652196 machine.go:97] duration metric: took 2.802248138s to provisionDockerMachine
I0701 12:21:31.849473 652196 start.go:293] postStartSetup for "ha-735960" (driver="kvm2")
I0701 12:21:31.849487 652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 12:21:31.849508 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:31.849934 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 12:21:31.849982 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:31.853029 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.853464 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:31.853494 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.853656 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:31.853877 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:31.854065 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:31.854242 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:31.948096 652196 ssh_runner.go:195] Run: cat /etc/os-release
I0701 12:21:31.952493 652196 info.go:137] Remote host: Buildroot 2023.02.9
I0701 12:21:31.952522 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
I0701 12:21:31.952580 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
I0701 12:21:31.952654 652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
I0701 12:21:31.952664 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
I0701 12:21:31.952750 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 12:21:31.962034 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
I0701 12:21:31.985898 652196 start.go:296] duration metric: took 136.407484ms for postStartSetup
I0701 12:21:31.985953 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:31.986287 652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0701 12:21:31.986316 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:31.988934 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.989328 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:31.989359 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:31.989497 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:31.989724 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:31.989863 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:31.990038 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:32.076710 652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0701 12:21:32.076807 652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
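rsync --archive --update restores the backed-up /etc tree with permissions and ownership intact while skipping any destination file that is already newer, so the restore cannot clobber files the guest modified after the backup was taken. A tiny demonstration of the semantics with hypothetical paths:

# --update keeps the newer destination file instead of overwriting it (sketch).
mkdir -p /tmp/demo/backup/etc /tmp/demo/root/etc
echo old > /tmp/demo/backup/etc/a
echo new > /tmp/demo/root/etc/a
touch -d '1 hour ago' /tmp/demo/backup/etc/a    # make the backup copy older
rsync --archive --update /tmp/demo/backup/etc /tmp/demo/root/
cat /tmp/demo/root/etc/a                        # prints "new"; destination kept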
I0701 12:21:32.133792 652196 fix.go:56] duration metric: took 18.045488816s for fixHost
I0701 12:21:32.133863 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:32.136703 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.137078 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.137110 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.137321 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:32.137591 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.137793 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.137963 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:32.138201 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:32.138518 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.16 22 <nil> <nil>}
I0701 12:21:32.138541 652196 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0701 12:21:32.254973 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836492.215186729
I0701 12:21:32.255001 652196 fix.go:216] guest clock: 1719836492.215186729
I0701 12:21:32.255007 652196 fix.go:229] Guest: 2024-07-01 12:21:32.215186729 +0000 UTC Remote: 2024-07-01 12:21:32.133836118 +0000 UTC m=+18.172225533 (delta=81.350611ms)
I0701 12:21:32.255027 652196 fix.go:200] guest clock delta is within tolerance: 81.350611ms
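The clock check above compares the guest's date +%s.%N output against the host-side timestamp taken around the SSH call; the ~81ms delta is accepted because it falls under minikube's drift tolerance. The measurement can be reproduced roughly by hand (sketch; the ssh destination is a hypothetical alias for the VM):

# Rough guest-vs-host clock delta; SSH round-trip time inflates it slightly (sketch).
host=$(date +%s.%N)
guest=$(ssh ha-735960 date +%s.%N)
echo "delta: $(echo "$guest - $host" | bc) seconds"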
I0701 12:21:32.255032 652196 start.go:83] releasing machines lock for "ha-735960", held for 18.166751927s
I0701 12:21:32.255050 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.255338 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:32.258091 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.258459 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.258481 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.258679 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.259224 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.259383 652196 main.go:141] libmachine: (ha-735960) Calling .DriverName
I0701 12:21:32.259520 652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0701 12:21:32.259564 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:32.259693 652196 ssh_runner.go:195] Run: cat /version.json
I0701 12:21:32.259718 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHHostname
I0701 12:21:32.262127 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.262481 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.262518 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.262538 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.262653 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:32.262845 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.263031 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:32.263054 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:32.263074 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:32.263215 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:32.263229 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHPort
I0701 12:21:32.263398 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHKeyPath
I0701 12:21:32.263547 652196 main.go:141] libmachine: (ha-735960) Calling .GetSSHUsername
I0701 12:21:32.263699 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960/id_rsa Username:docker}
I0701 12:21:32.343012 652196 ssh_runner.go:195] Run: systemctl --version
I0701 12:21:32.428409 652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0701 12:21:32.433742 652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0701 12:21:32.433815 652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0701 12:21:32.449052 652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0701 12:21:32.449087 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:32.449338 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:32.471651 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0701 12:21:32.481832 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0701 12:21:32.491470 652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0701 12:21:32.491548 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0701 12:21:32.501229 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:32.511119 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0701 12:21:32.520826 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:32.530559 652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0701 12:21:32.542109 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0701 12:21:32.551821 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0701 12:21:32.561403 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
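The run of sed edits above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 runtime, the pause:3.9 sandbox image, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled. One way to spot-check the result (sketch; the exact surrounding TOML will vary):

# Values the sed edits are expected to leave behind (sketch).
grep -nE 'SystemdCgroup|sandbox_image|conf_dir|restrict_oom_score_adj' \
  /etc/containerd/config.toml
# Expected, approximately:
#   SystemdCgroup = false
#   sandbox_image = "registry.k8s.io/pause:3.9"
#   conf_dir = "/etc/cni/net.d"
#   restrict_oom_score_adj = false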
I0701 12:21:32.571068 652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0701 12:21:32.579813 652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0701 12:21:32.588595 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:32.705377 652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0701 12:21:32.724169 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:32.724285 652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 12:21:32.739050 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:32.753169 652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0701 12:21:32.769805 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:32.783750 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:32.797509 652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0701 12:21:32.821510 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:32.835901 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:32.854192 652196 ssh_runner.go:195] Run: which cri-dockerd
I0701 12:21:32.858039 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0701 12:21:32.867652 652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0701 12:21:32.884216 652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 12:21:33.001636 652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 12:21:33.121229 652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0701 12:21:33.121419 652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
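The 130-byte /etc/docker/daemon.json pushed here is what makes Docker report cgroupfs as its cgroup driver (verified later via docker info --format {{.CgroupDriver}}). The exact bytes are not shown in the log; a plausible reconstruction, labeled as an assumption, looks like:

# Hypothetical daemon.json content; the real 130 bytes are not captured in this log.
cat /etc/docker/daemon.json
# {
#   "exec-opts": ["native.cgroupdriver=cgroupfs"]
# }
docker info --format '{{.CgroupDriver}}'    # expected: cgroupfs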
I0701 12:21:33.138482 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:33.262395 652196 ssh_runner.go:195] Run: sudo systemctl restart docker
I0701 12:21:35.714549 652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.452099351s)
I0701 12:21:35.714642 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0701 12:21:35.727946 652196 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0701 12:21:35.744089 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0701 12:21:35.757426 652196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0701 12:21:35.868089 652196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0701 12:21:35.989857 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:36.121343 652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0701 12:21:36.138520 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0701 12:21:36.152026 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:36.271312 652196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0701 12:21:36.351567 652196 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0701 12:21:36.351668 652196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0701 12:21:36.357143 652196 start.go:562] Will wait 60s for crictl version
I0701 12:21:36.357212 652196 ssh_runner.go:195] Run: which crictl
I0701 12:21:36.361384 652196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0701 12:21:36.400372 652196 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.0.1
RuntimeApiVersion: v1
I0701 12:21:36.400446 652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 12:21:36.427941 652196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0701 12:21:36.456620 652196 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.1 ...
I0701 12:21:36.456687 652196 main.go:141] libmachine: (ha-735960) Calling .GetIP
I0701 12:21:36.459384 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:36.459752 652196 main.go:141] libmachine: (ha-735960) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:20:7c", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:15:18 +0000 UTC Type:0 Mac:52:54:00:6c:20:7c Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-735960 Clientid:01:52:54:00:6c:20:7c}
I0701 12:21:36.459781 652196 main.go:141] libmachine: (ha-735960) DBG | domain ha-735960 has defined IP address 192.168.39.16 and MAC address 52:54:00:6c:20:7c in network mk-ha-735960
I0701 12:21:36.459970 652196 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0701 12:21:36.463956 652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 12:21:36.476676 652196 kubeadm.go:877] updating cluster {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0701 12:21:36.476851 652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 12:21:36.476914 652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 12:21:36.493466 652196 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
kindest/kindnetd:v20240513-cd2ac642
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0701 12:21:36.493530 652196 docker.go:615] Images already preloaded, skipping extraction
I0701 12:21:36.493620 652196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0701 12:21:36.510908 652196 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
kindest/kindnetd:v20240513-cd2ac642
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0701 12:21:36.510939 652196 cache_images.go:84] Images are preloaded, skipping loading
I0701 12:21:36.510952 652196 kubeadm.go:928] updating node { 192.168.39.16 8443 v1.30.2 docker true true} ...
I0701 12:21:36.511079 652196 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-735960 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
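The kubelet flags above are written as an ExecStart override in a systemd drop-in (copied a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), with the empty ExecStart= line clearing the base unit's command exactly as in the docker unit earlier. To inspect the merged unit the way systemd resolves it (sketch):

# Show the base kubelet unit plus all drop-ins in application order.
systemctl cat kubelet
# Confirm the flags in effect on the running process:
pgrep -a kubelet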
I0701 12:21:36.511139 652196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0701 12:21:36.536408 652196 cni.go:84] Creating CNI manager for ""
I0701 12:21:36.536430 652196 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0701 12:21:36.536441 652196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0701 12:21:36.536470 652196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-735960 NodeName:ha-735960 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0701 12:21:36.536633 652196 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.16
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-735960"
kubeletExtraArgs:
node-ip: 192.168.39.16
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
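The rendered v1beta3 kubeadm config above is copied to the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases ship a schema checker, so the file can be sanity-checked in place before any restart logic consumes it (sketch; assumes the copy has already happened):

# Validate the generated config with kubeadm's built-in checker (sketch).
sudo /var/lib/minikube/binaries/v1.30.2/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new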
I0701 12:21:36.536656 652196 kube-vip.go:115] generating kube-vip config ...
I0701 12:21:36.536698 652196 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0701 12:21:36.551906 652196 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0701 12:21:36.552024 652196 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.168.39.254
- name: prometheus_server
value: :2112
- name : lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
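kube-vip runs as a static pod on every control-plane node, takes leader election on the plndr-cp-lock lease, and the current leader answers ARP for the VIP 192.168.39.254 on eth0 while load-balancing API traffic on port 8443 (lb_enable above). Once the cluster is back, the VIP can be checked from inside a node (sketch):

# On the current kube-vip leader the VIP is bound to eth0 (sketch).
ip addr show eth0 | grep 192.168.39.254
# The API server should answer on the VIP from any node (self-signed cert, hence -k):
curl -k https://192.168.39.254:8443/version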
I0701 12:21:36.552078 652196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0701 12:21:36.561989 652196 binaries.go:44] Found k8s binaries, skipping transfer
I0701 12:21:36.562059 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0701 12:21:36.571281 652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
I0701 12:21:36.587480 652196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0701 12:21:36.603596 652196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
I0701 12:21:36.621063 652196 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
I0701 12:21:36.637192 652196 ssh_runner.go:195] Run: grep 192.168.39.254 control-plane.minikube.internal$ /etc/hosts
I0701 12:21:36.640909 652196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0701 12:21:36.652690 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:36.768142 652196 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0701 12:21:36.786625 652196 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960 for IP: 192.168.39.16
I0701 12:21:36.786655 652196 certs.go:194] generating shared ca certs ...
I0701 12:21:36.786676 652196 certs.go:226] acquiring lock for ca certs: {Name:mk34e166bfd069e523b2325e14d1812c523bff53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:36.786854 652196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key
I0701 12:21:36.786904 652196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key
I0701 12:21:36.786915 652196 certs.go:256] generating profile certs ...
I0701 12:21:36.787017 652196 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key
I0701 12:21:36.787046 652196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af
I0701 12:21:36.787059 652196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.16 192.168.39.86 192.168.39.97 192.168.39.254]
I0701 12:21:37.059263 652196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af ...
I0701 12:21:37.059305 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af: {Name:mk1be9dc4667506ac6fdcfb1e313edd1292fe7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.059483 652196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af ...
I0701 12:21:37.059496 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af: {Name:mkf9220e489bd04f035dab270c790bb3448ca6be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.059596 652196 certs.go:381] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt
I0701 12:21:37.059809 652196 certs.go:385] copying /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key.5c21f4af -> /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key
I0701 12:21:37.059969 652196 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key
I0701 12:21:37.059987 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0701 12:21:37.060000 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0701 12:21:37.060014 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0701 12:21:37.060026 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0701 12:21:37.060038 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0701 12:21:37.060054 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0701 12:21:37.060066 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0701 12:21:37.060077 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0701 12:21:37.060165 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem (1338 bytes)
W0701 12:21:37.060197 652196 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854_empty.pem, impossibly tiny 0 bytes
I0701 12:21:37.060207 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem (1675 bytes)
I0701 12:21:37.060228 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem (1078 bytes)
I0701 12:21:37.060248 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem (1123 bytes)
I0701 12:21:37.060270 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem (1675 bytes)
I0701 12:21:37.060305 652196 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem (1708 bytes)
I0701 12:21:37.060331 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem -> /usr/share/ca-certificates/637854.pem
I0701 12:21:37.060347 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.060359 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.061045 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0701 12:21:37.111708 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0701 12:21:37.168649 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0701 12:21:37.204675 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0701 12:21:37.241167 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0701 12:21:37.265225 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0701 12:21:37.288613 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0701 12:21:37.312645 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0701 12:21:37.337494 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/637854.pem --> /usr/share/ca-certificates/637854.pem (1338 bytes)
I0701 12:21:37.361044 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /usr/share/ca-certificates/6378542.pem (1708 bytes)
I0701 12:21:37.385424 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0701 12:21:37.409054 652196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0701 12:21:37.426602 652196 ssh_runner.go:195] Run: openssl version
I0701 12:21:37.432129 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0701 12:21:37.442695 652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.447331 652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 1 12:05 /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.447415 652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0701 12:21:37.453215 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0701 12:21:37.464086 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/637854.pem && ln -fs /usr/share/ca-certificates/637854.pem /etc/ssl/certs/637854.pem"
I0701 12:21:37.474527 652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/637854.pem
I0701 12:21:37.479057 652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 1 12:11 /usr/share/ca-certificates/637854.pem
I0701 12:21:37.479123 652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/637854.pem
I0701 12:21:37.484641 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/637854.pem /etc/ssl/certs/51391683.0"
I0701 12:21:37.495175 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6378542.pem && ln -fs /usr/share/ca-certificates/6378542.pem /etc/ssl/certs/6378542.pem"
I0701 12:21:37.505961 652196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.510286 652196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 1 12:11 /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.510365 652196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6378542.pem
I0701 12:21:37.516124 652196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6378542.pem /etc/ssl/certs/3ec20f2e.0"
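The pattern behind the last several commands: openssl x509 -hash prints the certificate's subject-name hash, and OpenSSL resolves trust lookups through /etc/ssl/certs/<hash>.0, which is why minikubeCA.pem gets the b5213941.0 symlink. The same check by hand (sketch):

# The symlink name is the subject hash plus a ".0" suffix (sketch).
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
ls -l "/etc/ssl/certs/$h.0"    # -> /etc/ssl/certs/minikubeCA.pem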
I0701 12:21:37.527154 652196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0701 12:21:37.532024 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0701 12:21:37.538145 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0701 12:21:37.544280 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0701 12:21:37.550448 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0701 12:21:37.556356 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0701 12:21:37.562174 652196 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
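Each -checkend 86400 probe asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, nonzero would send minikube down the regeneration path. Standalone (sketch):

# Exit 0 if the cert stays valid for at least the next 24h, else exit 1 (sketch).
openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
  && echo "valid for >=24h" || echo "expires within 24h"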
I0701 12:21:37.568144 652196 kubeadm.go:391] StartCluster: {Name:ha-735960 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19142/minikube-v1.33.1-1719412936-19142-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-735960 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.86 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.60 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0701 12:21:37.568362 652196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0701 12:21:37.586457 652196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0701 12:21:37.596129 652196 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0701 12:21:37.596158 652196 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0701 12:21:37.596164 652196 kubeadm.go:587] restartPrimaryControlPlane start ...
I0701 12:21:37.596237 652196 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0701 12:21:37.605715 652196 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0701 12:21:37.606193 652196 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-735960" does not appear in /home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:37.606354 652196 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-630650/kubeconfig needs updating (will repair): [kubeconfig missing "ha-735960" cluster setting kubeconfig missing "ha-735960" context setting]
I0701 12:21:37.606708 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.607135 652196 loader.go:395] Config loaded from file: /home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:37.607365 652196 kapi.go:59] client config for ha-735960: &rest.Config{Host:"https://192.168.39.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/client.key", CAFile:"/home/jenkins/minikube-integration/19166-630650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfbb80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0701 12:21:37.607752 652196 cert_rotation.go:137] Starting client certificate rotation controller
I0701 12:21:37.608047 652196 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0701 12:21:37.617685 652196 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.16
I0701 12:21:37.617715 652196 kubeadm.go:591] duration metric: took 21.544408ms to restartPrimaryControlPlane
I0701 12:21:37.617725 652196 kubeadm.go:393] duration metric: took 49.593354ms to StartCluster
I0701 12:21:37.617748 652196 settings.go:142] acquiring lock: {Name:mk6f7c85ea77a73ff0ac851454721f2e6e309153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.617834 652196 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19166-630650/kubeconfig
I0701 12:21:37.618535 652196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-630650/kubeconfig: {Name:mke3ef9d019eff4edd273b00c416fd77ed009242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0701 12:21:37.618754 652196 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0701 12:21:37.618777 652196 start.go:240] waiting for startup goroutines ...
I0701 12:21:37.618792 652196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0701 12:21:37.619028 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:37.621683 652196 out.go:177] * Enabled addons:
I0701 12:21:37.622979 652196 addons.go:510] duration metric: took 4.192015ms for enable addons: enabled=[]
I0701 12:21:37.623011 652196 start.go:245] waiting for cluster config update ...
I0701 12:21:37.623019 652196 start.go:254] writing updated cluster config ...
I0701 12:21:37.624600 652196 out.go:177]
I0701 12:21:37.626023 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:37.626124 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:37.627745 652196 out.go:177] * Starting "ha-735960-m02" control-plane node in "ha-735960" cluster
I0701 12:21:37.628946 652196 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0701 12:21:37.628969 652196 cache.go:56] Caching tarball of preloaded images
I0701 12:21:37.629060 652196 preload.go:173] Found /home/jenkins/minikube-integration/19166-630650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0701 12:21:37.629072 652196 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0701 12:21:37.629161 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:37.629353 652196 start.go:360] acquireMachinesLock for ha-735960-m02: {Name:mk43a6c0c0c15a237623df377ad65b82186c0ad8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0701 12:21:37.629411 652196 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "ha-735960-m02"
I0701 12:21:37.629427 652196 start.go:96] Skipping create...Using existing machine configuration
I0701 12:21:37.629440 652196 fix.go:54] fixHost starting: m02
I0701 12:21:37.629698 652196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0701 12:21:37.629747 652196 main.go:141] libmachine: Launching plugin server for driver kvm2
I0701 12:21:37.644981 652196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
I0701 12:21:37.645473 652196 main.go:141] libmachine: () Calling .GetVersion
I0701 12:21:37.645947 652196 main.go:141] libmachine: Using API Version 1
I0701 12:21:37.645969 652196 main.go:141] libmachine: () Calling .SetConfigRaw
I0701 12:21:37.646284 652196 main.go:141] libmachine: () Calling .GetMachineName
I0701 12:21:37.646523 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:37.646646 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetState
I0701 12:21:37.648195 652196 fix.go:112] recreateIfNeeded on ha-735960-m02: state=Stopped err=<nil>
I0701 12:21:37.648228 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
W0701 12:21:37.648406 652196 fix.go:138] unexpected machine state, will restart: <nil>
I0701 12:21:37.650489 652196 out.go:177] * Restarting existing kvm2 VM for "ha-735960-m02" ...
I0701 12:21:37.651975 652196 main.go:141] libmachine: (ha-735960-m02) Calling .Start
I0701 12:21:37.652186 652196 main.go:141] libmachine: (ha-735960-m02) Ensuring networks are active...
I0701 12:21:37.652916 652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network default is active
I0701 12:21:37.653282 652196 main.go:141] libmachine: (ha-735960-m02) Ensuring network mk-ha-735960 is active
I0701 12:21:37.653613 652196 main.go:141] libmachine: (ha-735960-m02) Getting domain xml...
I0701 12:21:37.654254 652196 main.go:141] libmachine: (ha-735960-m02) Creating domain...
I0701 12:21:38.852369 652196 main.go:141] libmachine: (ha-735960-m02) Waiting to get IP...
I0701 12:21:38.853358 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:38.853762 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:38.853832 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:38.853747 652384 retry.go:31] will retry after 295.798088ms: waiting for machine to come up
I0701 12:21:39.151332 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:39.151886 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:39.151912 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.151845 652384 retry.go:31] will retry after 255.18729ms: waiting for machine to come up
I0701 12:21:39.408310 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:39.408739 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:39.408792 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.408689 652384 retry.go:31] will retry after 457.740061ms: waiting for machine to come up
I0701 12:21:39.868295 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:39.868702 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:39.868736 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:39.868629 652384 retry.go:31] will retry after 548.674851ms: waiting for machine to come up
I0701 12:21:40.419597 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:40.420069 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:40.420100 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:40.420009 652384 retry.go:31] will retry after 755.113146ms: waiting for machine to come up
I0701 12:21:41.176960 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:41.177380 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:41.177429 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.177309 652384 retry.go:31] will retry after 739.288718ms: waiting for machine to come up
I0701 12:21:41.918305 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:41.918853 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:41.918884 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:41.918789 652384 retry.go:31] will retry after 722.041404ms: waiting for machine to come up
I0701 12:21:42.642704 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:42.643188 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:42.643219 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:42.643113 652384 retry.go:31] will retry after 1.139279839s: waiting for machine to come up
I0701 12:21:43.784719 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:43.785159 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:43.785193 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:43.785114 652384 retry.go:31] will retry after 1.276779849s: waiting for machine to come up
I0701 12:21:45.063522 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:45.064026 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:45.064058 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:45.063969 652384 retry.go:31] will retry after 2.284492799s: waiting for machine to come up
I0701 12:21:47.351530 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:47.352076 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:47.352113 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:47.351988 652384 retry.go:31] will retry after 2.171521184s: waiting for machine to come up
I0701 12:21:49.526162 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:49.526566 652196 main.go:141] libmachine: (ha-735960-m02) DBG | unable to find current IP address of domain ha-735960-m02 in network mk-ha-735960
I0701 12:21:49.526590 652196 main.go:141] libmachine: (ha-735960-m02) DBG | I0701 12:21:49.526523 652384 retry.go:31] will retry after 3.533181759s: waiting for machine to come up
I0701 12:21:53.061482 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.062025 652196 main.go:141] libmachine: (ha-735960-m02) Found IP for machine: 192.168.39.86
I0701 12:21:53.062048 652196 main.go:141] libmachine: (ha-735960-m02) Reserving static IP address...
I0701 12:21:53.062060 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has current primary IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.062473 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.062504 652196 main.go:141] libmachine: (ha-735960-m02) DBG | skip adding static IP to network mk-ha-735960 - found existing host DHCP lease matching {name: "ha-735960-m02", mac: "52:54:00:0b:2f:ce", ip: "192.168.39.86"}
I0701 12:21:53.062534 652196 main.go:141] libmachine: (ha-735960-m02) Reserved static IP address: 192.168.39.86
I0701 12:21:53.062554 652196 main.go:141] libmachine: (ha-735960-m02) Waiting for SSH to be available...
I0701 12:21:53.062566 652196 main.go:141] libmachine: (ha-735960-m02) DBG | Getting to WaitForSSH function...
I0701 12:21:53.064461 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.064796 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.064828 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.064893 652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH client type: external
I0701 12:21:53.064938 652196 main.go:141] libmachine: (ha-735960-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa (-rw-------)
I0701 12:21:53.064965 652196 main.go:141] libmachine: (ha-735960-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.86 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
I0701 12:21:53.064981 652196 main.go:141] libmachine: (ha-735960-m02) DBG | About to run SSH command:
I0701 12:21:53.065000 652196 main.go:141] libmachine: (ha-735960-m02) DBG | exit 0
I0701 12:21:53.190266 652196 main.go:141] libmachine: (ha-735960-m02) DBG | SSH cmd err, output: <nil>:
I0701 12:21:53.190636 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetConfigRaw
I0701 12:21:53.191272 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
I0701 12:21:53.193658 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.193994 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.194027 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.194274 652196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-630650/.minikube/profiles/ha-735960/config.json ...
I0701 12:21:53.194544 652196 machine.go:94] provisionDockerMachine start ...
I0701 12:21:53.194562 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:53.194814 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.196894 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.197262 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.197291 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.197414 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.197654 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.197829 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.198021 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.198185 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.198432 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.198448 652196 main.go:141] libmachine: About to run SSH command:
hostname
I0701 12:21:53.306480 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0701 12:21:53.306526 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
I0701 12:21:53.306839 652196 buildroot.go:166] provisioning hostname "ha-735960-m02"
I0701 12:21:53.306870 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
I0701 12:21:53.307063 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.309645 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.310086 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.310116 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.310307 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.310514 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.310689 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.310820 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.310997 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.311210 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.311225 652196 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-735960-m02 && echo "ha-735960-m02" | sudo tee /etc/hostname
I0701 12:21:53.434956 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-735960-m02
I0701 12:21:53.434992 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.437612 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.438016 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.438040 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.438190 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.438418 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.438601 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.438768 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.438926 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.439106 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.439128 652196 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-735960-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-735960-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-735960-m02' | sudo tee -a /etc/hosts;
fi
fi
I0701 12:21:53.559115 652196 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0701 12:21:53.559146 652196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19166-630650/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-630650/.minikube}
I0701 12:21:53.559163 652196 buildroot.go:174] setting up certificates
I0701 12:21:53.559174 652196 provision.go:84] configureAuth start
I0701 12:21:53.559186 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetMachineName
I0701 12:21:53.559514 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
I0701 12:21:53.562119 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.562516 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.562550 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.562753 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.564741 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.565063 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.565082 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.565233 652196 provision.go:143] copyHostCerts
I0701 12:21:53.565266 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:53.565309 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem, removing ...
I0701 12:21:53.565318 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem
I0701 12:21:53.565379 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/ca.pem (1078 bytes)
I0701 12:21:53.565450 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:53.565468 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem, removing ...
I0701 12:21:53.565474 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem
I0701 12:21:53.565492 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/cert.pem (1123 bytes)
I0701 12:21:53.565533 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:53.565549 652196 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem, removing ...
I0701 12:21:53.565555 652196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem
I0701 12:21:53.565570 652196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-630650/.minikube/key.pem (1675 bytes)
I0701 12:21:53.565618 652196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca-key.pem org=jenkins.ha-735960-m02 san=[127.0.0.1 192.168.39.86 ha-735960-m02 localhost minikube]
I0701 12:21:53.749696 652196 provision.go:177] copyRemoteCerts
I0701 12:21:53.749755 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0701 12:21:53.749780 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.752460 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.752780 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.752813 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.752952 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.753159 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.753385 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.753547 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:53.835990 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0701 12:21:53.836060 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0701 12:21:53.858665 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem -> /etc/docker/server.pem
I0701 12:21:53.858753 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0701 12:21:53.880281 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0701 12:21:53.880367 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0701 12:21:53.902677 652196 provision.go:87] duration metric: took 343.48703ms to configureAuth
I0701 12:21:53.902709 652196 buildroot.go:189] setting minikube options for container-runtime
I0701 12:21:53.903020 652196 config.go:182] Loaded profile config "ha-735960": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0701 12:21:53.903053 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:53.903351 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:53.905929 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.906189 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:53.906216 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:53.906438 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:53.906667 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.906826 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:53.906966 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:53.907119 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:53.907282 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:53.907294 652196 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0701 12:21:54.019474 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0701 12:21:54.019501 652196 buildroot.go:70] root file system type: tmpfs
I0701 12:21:54.019656 652196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0701 12:21:54.019681 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:54.022816 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.023184 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:54.023208 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.023371 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:54.023579 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.023787 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.023946 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:54.024146 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:54.024319 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:54.024384 652196 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.39.16"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0701 12:21:54.147740 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.39.16
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0701 12:21:54.147778 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:54.150547 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.151173 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:54.151208 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:54.151345 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:54.151561 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.151771 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:54.151918 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:54.152095 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:54.152266 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:54.152281 652196 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0701 12:21:56.028628 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0701 12:21:56.028682 652196 machine.go:97] duration metric: took 2.834118436s to provisionDockerMachine
I0701 12:21:56.028701 652196 start.go:293] postStartSetup for "ha-735960-m02" (driver="kvm2")
I0701 12:21:56.028716 652196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0701 12:21:56.028738 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.029099 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0701 12:21:56.029132 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.031882 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.032264 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.032289 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.032433 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.032608 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.032817 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.032971 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:56.117309 652196 ssh_runner.go:195] Run: cat /etc/os-release
I0701 12:21:56.121231 652196 info.go:137] Remote host: Buildroot 2023.02.9
I0701 12:21:56.121263 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/addons for local assets ...
I0701 12:21:56.121324 652196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-630650/.minikube/files for local assets ...
I0701 12:21:56.121391 652196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> 6378542.pem in /etc/ssl/certs
I0701 12:21:56.121402 652196 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem -> /etc/ssl/certs/6378542.pem
I0701 12:21:56.121478 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0701 12:21:56.130302 652196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-630650/.minikube/files/etc/ssl/certs/6378542.pem --> /etc/ssl/certs/6378542.pem (1708 bytes)
I0701 12:21:56.152776 652196 start.go:296] duration metric: took 124.058691ms for postStartSetup
I0701 12:21:56.152821 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.153142 652196 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0701 12:21:56.153170 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.155689 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.156094 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.156120 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.156332 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.156555 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.156727 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.156917 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:56.240391 652196 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0701 12:21:56.240454 652196 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0701 12:21:56.280843 652196 fix.go:56] duration metric: took 18.651393475s for fixHost
I0701 12:21:56.280895 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.283268 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.283590 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.283617 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.283860 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.284107 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.284307 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.284501 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.284686 652196 main.go:141] libmachine: Using SSH client type: native
I0701 12:21:56.284888 652196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil> [] 0s} 192.168.39.86 22 <nil> <nil>}
I0701 12:21:56.284903 652196 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0701 12:21:56.398873 652196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1719836516.359963406
I0701 12:21:56.398893 652196 fix.go:216] guest clock: 1719836516.359963406
I0701 12:21:56.398901 652196 fix.go:229] Guest: 2024-07-01 12:21:56.359963406 +0000 UTC Remote: 2024-07-01 12:21:56.280872467 +0000 UTC m=+42.319261894 (delta=79.090939ms)
I0701 12:21:56.398919 652196 fix.go:200] guest clock delta is within tolerance: 79.090939ms
I0701 12:21:56.398924 652196 start.go:83] releasing machines lock for "ha-735960-m02", held for 18.769503298s
I0701 12:21:56.398940 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.399198 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetIP
I0701 12:21:56.401982 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.402404 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.402436 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.404680 652196 out.go:177] * Found network options:
I0701 12:21:56.406167 652196 out.go:177] - NO_PROXY=192.168.39.16
W0701 12:21:56.407620 652196 proxy.go:119] fail to check proxy env: Error ip not in block
I0701 12:21:56.407664 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.408285 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.408498 652196 main.go:141] libmachine: (ha-735960-m02) Calling .DriverName
I0701 12:21:56.408606 652196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0701 12:21:56.408647 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
W0701 12:21:56.408741 652196 proxy.go:119] fail to check proxy env: Error ip not in block
I0701 12:21:56.408826 652196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0701 12:21:56.408849 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHHostname
I0701 12:21:56.411170 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.411559 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.411598 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.411651 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.411933 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.412130 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.412221 652196 main.go:141] libmachine: (ha-735960-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0b:2f:ce", ip: ""} in network mk-ha-735960: {Iface:virbr1 ExpiryTime:2024-07-01 13:21:47 +0000 UTC Type:0 Mac:52:54:00:0b:2f:ce Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:ha-735960-m02 Clientid:01:52:54:00:0b:2f:ce}
I0701 12:21:56.412247 652196 main.go:141] libmachine: (ha-735960-m02) DBG | domain ha-735960-m02 has defined IP address 192.168.39.86 and MAC address 52:54:00:0b:2f:ce in network mk-ha-735960
I0701 12:21:56.412295 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.412519 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHPort
I0701 12:21:56.412508 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
I0701 12:21:56.412720 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHKeyPath
I0701 12:21:56.412871 652196 main.go:141] libmachine: (ha-735960-m02) Calling .GetSSHUsername
I0701 12:21:56.412987 652196 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19166-630650/.minikube/machines/ha-735960-m02/id_rsa Username:docker}
W0701 12:21:56.492511 652196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0701 12:21:56.492595 652196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0701 12:21:56.515270 652196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0701 12:21:56.515305 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:56.515419 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:56.549004 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0701 12:21:56.560711 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0701 12:21:56.578763 652196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0701 12:21:56.578832 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0701 12:21:56.589742 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:56.606645 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0701 12:21:56.620036 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0701 12:21:56.632033 652196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0701 12:21:56.642458 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0701 12:21:56.653078 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0701 12:21:56.663035 652196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
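The sed edits above rewrite /etc/containerd/config.toml in place; a minimal spot-check that they landed (a sketch, assuming the stock guest template) is:

grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|io.containerd.runc.v2|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
# expected values after the edits (indentation approximate):
#   sandbox_image = "registry.k8s.io/pause:3.9"
#   restrict_oom_score_adj = false
#   SystemdCgroup = false
#   runtime_type = "io.containerd.runc.v2"
#   conf_dir = "/etc/cni/net.d"
#   enable_unprivileged_ports = true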
I0701 12:21:56.673203 652196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0701 12:21:56.682348 652196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0701 12:21:56.691388 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:56.798709 652196 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0701 12:21:56.821386 652196 start.go:494] detecting cgroup driver to use...
I0701 12:21:56.821493 652196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0701 12:21:56.841303 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:56.857934 652196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0701 12:21:56.877318 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0701 12:21:56.889777 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:56.901844 652196 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0701 12:21:56.927595 652196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0701 12:21:56.940849 652196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0701 12:21:56.958116 652196 ssh_runner.go:195] Run: which cri-dockerd
I0701 12:21:56.961664 652196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0701 12:21:56.969985 652196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0701 12:21:56.985048 652196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0701 12:21:57.096072 652196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0701 12:21:57.211289 652196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0701 12:21:57.211354 652196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
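The 130-byte daemon.json written above is not echoed in the log; a plausible shape for the "cgroupfs" driver, offered only as an assumption about what minikube generates, can be inspected with:

cat /etc/docker/daemon.json
# {
#   "exec-opts": ["native.cgroupdriver=cgroupfs"],   # contents below are an assumption
#   "log-driver": "json-file",
#   "log-opts": { "max-size": "100m" },
#   "storage-driver": "overlay2"
# }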
I0701 12:21:57.227069 652196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0701 12:21:57.341292 652196 ssh_runner.go:195] Run: sudo systemctl restart docker
I0701 12:22:58.423195 652196 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.08185799s)
I0701 12:22:58.423268 652196 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0701 12:22:58.444321 652196 out.go:177]
W0701 12:22:58.445678 652196 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jul 01 12:21:54 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.524329635Z" level=info msg="Starting up"
Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525054987Z" level=info msg="containerd not running, starting managed containerd"
Jul 01 12:21:54 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:54.525787354Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=513
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.553695593Z" level=info msg="starting containerd" revision=ae71819c4f5e67bb4d5ae76a6b735f29cc25774e version=v1.7.18
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572290393Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572432449Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572518940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572558429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.572981597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573093539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573355911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573425452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573469593Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573505057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.573782642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.574848351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.576951334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577031827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577253828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577304329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577551634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577624370Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.577665230Z" level=info msg="metadata content store policy set" policy=shared
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.580979416Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581128476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581284824Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581371031Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581432559Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581524784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.581996275Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582118070Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582162131Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582245548Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582319648Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582368655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582407448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582445279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582484550Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582521928Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582558472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582601035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582656126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582693985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582741537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582779033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582815513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582853076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582892671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582938669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.582980248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583032987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583083364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583122445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583262727Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583333396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583373579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583414811Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583520612Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583751718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583800626Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583838317Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583874340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583912430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.583991424Z" level=info msg="NRI interface is disabled by configuration."
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584364167Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584467963Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584654486Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 01 12:21:54 ha-735960-m02 dockerd[513]: time="2024-07-01T12:21:54.584785754Z" level=info msg="containerd successfully booted in 0.032655s"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.555699119Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.620790434Z" level=info msg="Loading containers: start."
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.813021303Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.888534738Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.940299653Z" level=info msg="Loading containers: done."
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956534314Z" level=info msg="Docker daemon" commit=ff1e2c0 containerd-snapshotter=false storage-driver=overlay2 version=27.0.1
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.956851438Z" level=info msg="Daemon has completed initialization"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988054435Z" level=info msg="API listen on /var/run/docker.sock"
Jul 01 12:21:55 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:55.988129188Z" level=info msg="API listen on [::]:2376"
Jul 01 12:21:55 ha-735960-m02 systemd[1]: Started Docker Application Container Engine.
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.316115209Z" level=info msg="Processing signal 'terminated'"
Jul 01 12:21:57 ha-735960-m02 systemd[1]: Stopping Docker Application Container Engine...
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317321834Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317386191Z" level=info msg="Daemon shutdown complete"
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317447382Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 01 12:21:57 ha-735960-m02 dockerd[506]: time="2024-07-01T12:21:57.317464543Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 01 12:21:58 ha-735960-m02 systemd[1]: docker.service: Deactivated successfully.
Jul 01 12:21:58 ha-735960-m02 systemd[1]: Stopped Docker Application Container Engine.
Jul 01 12:21:58 ha-735960-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 01 12:21:58 ha-735960-m02 dockerd[1188]: time="2024-07-01T12:21:58.364754006Z" level=info msg="Starting up"
Jul 01 12:22:58 ha-735960-m02 dockerd[1188]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 01 12:22:58 ha-735960-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 01 12:22:58 ha-735960-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0701 12:22:58.445741 652196 out.go:239] *
W0701 12:22:58.447325 652196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0701 12:22:58.449434 652196 out.go:177]
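Given the failure signature above (dockerd pid 1188 timing out after 60s while dialing /run/containerd/containerd.sock), a few node-side checks narrow it down. This is a hedged diagnostic sketch, not part of the test run, and it assumes ctr is present in the guest image:

sudo systemctl status containerd --no-pager      # did the earlier 'systemctl restart containerd' leave it healthy?
ls -l /run/containerd/containerd.sock            # the socket dockerd fails to dial
sudo journalctl -u containerd --no-pager | tail -n 20
sudo ctr --address /run/containerd/containerd.sock version   # direct dial test, bypassing dockerd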
==> Docker <==
Jul 01 12:21:44 ha-735960 dockerd[1190]: time="2024-07-01T12:21:44.208507474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 01 12:22:05 ha-735960 dockerd[1184]: time="2024-07-01T12:22:05.425890009Z" level=info msg="ignoring event" container=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.426406022Z" level=info msg="shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427162251Z" level=warning msg="cleaning up after shim disconnected" id=d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786 namespace=moby
Jul 01 12:22:05 ha-735960 dockerd[1190]: time="2024-07-01T12:22:05.427275716Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.439101176Z" level=info msg="shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
Jul 01 12:22:06 ha-735960 dockerd[1184]: time="2024-07-01T12:22:06.441768147Z" level=info msg="ignoring event" container=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442054407Z" level=warning msg="cleaning up after shim disconnected" id=ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80 namespace=moby
Jul 01 12:22:06 ha-735960 dockerd[1190]: time="2024-07-01T12:22:06.442214156Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.071877635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072398316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072506177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 01 12:22:24 ha-735960 dockerd[1190]: time="2024-07-01T12:22:24.072761669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091757274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091819785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.091834055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 01 12:22:27 ha-735960 dockerd[1190]: time="2024-07-01T12:22:27.092367194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 01 12:22:47 ha-735960 dockerd[1184]: time="2024-07-01T12:22:47.577930706Z" level=info msg="ignoring event" container=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578670317Z" level=info msg="shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.578983718Z" level=warning msg="cleaning up after shim disconnected" id=e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30 namespace=moby
Jul 01 12:22:47 ha-735960 dockerd[1190]: time="2024-07-01T12:22:47.579585559Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 01 12:22:48 ha-735960 dockerd[1184]: time="2024-07-01T12:22:48.582829662Z" level=info msg="ignoring event" container=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.583282892Z" level=info msg="shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584157023Z" level=warning msg="cleaning up after shim disconnected" id=829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d namespace=moby
Jul 01 12:22:48 ha-735960 dockerd[1190]: time="2024-07-01T12:22:48.584285564Z" level=info msg="cleaning up dead shim" namespace=moby
==> container status <==
CONTAINER       IMAGE           CREATED              STATE     NAME                      ATTEMPT   POD ID          POD
e546c39248bc8   56ce0fd9fb532   32 seconds ago       Exited    kube-apiserver            2         16dae930b4edb   kube-apiserver-ha-735960
829fe19c75ce3   e874818b3caac   35 seconds ago       Exited    kube-controller-manager   2         5e2a9b91be69c   kube-controller-manager-ha-735960
cecb3dd12e16e   38af8ddebf499   About a minute ago   Running   kube-vip                  0         8d1562fb4b8c3   kube-vip-ha-735960
6a200a6b49020   3861cfcd7c04c   About a minute ago   Running   etcd                      1         5b1097d48d724   etcd-ha-735960
2d71437c5f06d   7820c83aa1394   About a minute ago   Running   kube-scheduler            1         fa7dea6a1b8bd   kube-scheduler-ha-735960
14112a4d8f2cb   38af8ddebf499   2 minutes ago        Exited    kube-vip                  1         46ab74fdab7e2   kube-vip-ha-735960
1ef6d9da6a9c5   gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 minutes ago        Exited    busybox                   0         1f5ccc7b0e655   busybox-fc5497c4f-pjfcw
a9c30cd4b3455   cbb01a7bd410d   6 minutes ago        Exited    coredns                   0         7b4b4f7ec4b63   coredns-7db6d8ff4d-nk4lf
769b0b8751350   cbb01a7bd410d   6 minutes ago        Exited    coredns                   0         7a349370d4f88   coredns-7db6d8ff4d-p4rtz
97d58c94f3fdc   6e38f40d628db   6 minutes ago        Exited    storage-provisioner       0         9226633ad878a   storage-provisioner
f472aef5302fd   kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   6 minutes ago        Exited    kindnet-cni               0         ab9c74a502295   kindnet-7f6hm
6116abe6039dc   53c535741fb44   6 minutes ago        Exited    kube-proxy                0         da69191059798   kube-proxy-lphzn
cb63d54411807   7820c83aa1394   7 minutes ago        Exited    kube-scheduler            0         19b6b0e6ed64e   kube-scheduler-ha-735960
24c8926d2b31d   3861cfcd7c04c   7 minutes ago        Exited    etcd                      0         d3b914e19ca22   etcd-ha-735960
==> coredns [769b0b875135] <==
[INFO] 10.244.1.2:44221 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000082797s
[INFO] 10.244.2.2:33797 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157729s
[INFO] 10.244.2.2:52590 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004055351s
[INFO] 10.244.2.2:46983 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003253494s
[INFO] 10.244.2.2:56187 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205215s
[INFO] 10.244.2.2:41086 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158307s
[INFO] 10.244.0.4:47783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097077s
[INFO] 10.244.0.4:50743 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001523s
[INFO] 10.244.0.4:37141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138763s
[INFO] 10.244.1.2:32981 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132906s
[INFO] 10.244.1.2:36762 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001646552s
[INFO] 10.244.1.2:33583 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072434s
[INFO] 10.244.2.2:37027 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000156518s
[INFO] 10.244.2.2:58435 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104504s
[INFO] 10.244.2.2:36107 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090251s
[INFO] 10.244.0.4:44792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227164s
[INFO] 10.244.0.4:56557 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140925s
[INFO] 10.244.1.2:38284 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000232717s
[INFO] 10.244.2.2:37664 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135198s
[INFO] 10.244.2.2:60876 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00032392s
[INFO] 10.244.1.2:37461 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133264s
[INFO] 10.244.1.2:45182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117372s
[INFO] 10.244.1.2:37156 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000240093s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [a9c30cd4b345] <==
[INFO] 10.244.0.4:57095 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002251804s
[INFO] 10.244.0.4:42381 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081215s
[INFO] 10.244.0.4:53499 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124929s
[INFO] 10.244.0.4:41287 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174281s
[INFO] 10.244.0.4:36433 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142863s
[INFO] 10.244.1.2:47688 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130034s
[INFO] 10.244.1.2:40562 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183587s
[INFO] 10.244.1.2:35137 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000771s
[INFO] 10.244.1.2:37798 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184282s
[INFO] 10.244.1.2:43876 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008807s
[INFO] 10.244.2.2:35039 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119303s
[INFO] 10.244.0.4:53229 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090292s
[INFO] 10.244.0.4:42097 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00011308s
[INFO] 10.244.1.2:42114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130767s
[INFO] 10.244.1.2:56638 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110707s
[INFO] 10.244.1.2:55805 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093484s
[INFO] 10.244.2.2:51675 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000145117s
[INFO] 10.244.2.2:56838 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000136843s
[INFO] 10.244.0.4:60951 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162889s
[INFO] 10.244.0.4:34776 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112367s
[INFO] 10.244.0.4:45397 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073771s
[INFO] 10.244.0.4:52372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000058127s
[INFO] 10.244.1.2:41033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000131962s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0701 12:22:59.377640 2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0701 12:22:59.378152 2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0701 12:22:59.379637 2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0701 12:22:59.379997 2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0701 12:22:59.381475 2586 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?
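The connection-refused errors above match the container-status table further up, where kube-apiserver (e546c39248bc) has Exited. Minimal probes from inside the node, sketched here rather than taken from the run:

curl -sk https://localhost:8443/healthz; echo        # expect: connection refused while the apiserver is down
docker ps -a --filter name=kube-apiserver --format '{{.Names}} {{.Status}}'
docker logs --tail 5 e546c39248bc                    # container ID from the table above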
==> dmesg <==
[Jul 1 12:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.050877] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
[ +0.036108] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +4.421397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +1.628587] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
[ +2.463440] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
[ +4.322115] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
[ +0.057798] kauditd_printk_skb: 1 callbacks suppressed
[ +0.060958] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
[ +2.352578] systemd-fstab-generator[1113]: Ignoring "noauto" option for root device
[ +0.297044] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
[ +0.121689] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
[ +0.127513] systemd-fstab-generator[1176]: Ignoring "noauto" option for root device
[ +2.293985] kauditd_printk_skb: 195 callbacks suppressed
[ +0.325101] systemd-fstab-generator[1411]: Ignoring "noauto" option for root device
[ +0.108851] systemd-fstab-generator[1423]: Ignoring "noauto" option for root device
[ +0.138237] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
[ +0.156114] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
[ +0.494872] systemd-fstab-generator[1603]: Ignoring "noauto" option for root device
[ +6.977462] kauditd_printk_skb: 176 callbacks suppressed
[ +11.291301] kauditd_printk_skb: 40 callbacks suppressed
==> etcd [24c8926d2b31] <==
{"level":"info","ts":"2024-07-01T12:21:01.297933Z","caller":"traceutil/trace.go:171","msg":"trace[249123960] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; }","duration":"4.106112275s","start":"2024-07-01T12:20:57.191803Z","end":"2024-07-01T12:21:01.297915Z","steps":["trace[249123960] 'agreement among raft nodes before linearized reading' (duration: 4.10601913s)"],"step_count":1}
{"level":"warn","ts":"2024-07-01T12:21:01.298006Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T12:20:57.191796Z","time spent":"4.106166982s","remote":"127.0.0.1:56240","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":0,"response size":0,"request content":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true "}
2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2024/07/01 12:21:01 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2024-07-01T12:21:01.381902Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-01T12:21:01.38194Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
{"level":"info","ts":"2024-07-01T12:21:01.38203Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b6c76b3131c1024","current-leader-member-id":"0"}
{"level":"info","ts":"2024-07-01T12:21:01.382382Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.382398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.38247Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.382583Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.382685Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.382809Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.382826Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c77bbbee62c21090"}
{"level":"info","ts":"2024-07-01T12:21:01.382832Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.382882Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.3829Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.385706Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.385804Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.385838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b6c76b3131c1024","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.385849Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"77557cf66c24e9ff"}
{"level":"info","ts":"2024-07-01T12:21:01.406065Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.16:2380"}
{"level":"info","ts":"2024-07-01T12:21:01.406193Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.16:2380"}
{"level":"info","ts":"2024-07-01T12:21:01.406214Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-735960","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
==> etcd [6a200a6b4902] <==
{"level":"info","ts":"2024-07-01T12:22:54.688918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-01T12:22:54.689365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-01T12:22:54.689616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
{"level":"info","ts":"2024-07-01T12:22:54.689896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
{"level":"info","ts":"2024-07-01T12:22:54.689984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
{"level":"warn","ts":"2024-07-01T12:22:54.766483Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:ha-735960 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
{"level":"warn","ts":"2024-07-01T12:22:54.810935Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
{"level":"warn","ts":"2024-07-01T12:22:54.81101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c77bbbee62c21090","rtt":"0s","error":"dial tcp 192.168.39.86:2380: connect: connection refused"}
{"level":"warn","ts":"2024-07-01T12:22:54.827555Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: i/o timeout"}
{"level":"warn","ts":"2024-07-01T12:22:54.827561Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"77557cf66c24e9ff","rtt":"0s","error":"dial tcp 192.168.39.97:2380: i/o timeout"}
{"level":"info","ts":"2024-07-01T12:22:56.088711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-01T12:22:56.088779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-01T12:22:56.088792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
{"level":"info","ts":"2024-07-01T12:22:56.088806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
{"level":"info","ts":"2024-07-01T12:22:56.088813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
{"level":"info","ts":"2024-07-01T12:22:57.488845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-01T12:22:57.488894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-01T12:22:57.488907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
{"level":"info","ts":"2024-07-01T12:22:57.488922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
{"level":"info","ts":"2024-07-01T12:22:57.488929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
{"level":"info","ts":"2024-07-01T12:22:58.888088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-01T12:22:58.888193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-01T12:22:58.888234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
{"level":"info","ts":"2024-07-01T12:22:58.888281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to 77557cf66c24e9ff at term 2"}
{"level":"info","ts":"2024-07-01T12:22:58.888295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 [logterm: 2, index: 1880] sent MsgPreVote request to c77bbbee62c21090 at term 2"}
==> kernel <==
12:22:59 up 1 min, 0 users, load average: 0.14, 0.07, 0.02
Linux ha-735960 5.10.207 #1 SMP Wed Jun 26 19:37:34 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kindnet [f472aef5302f] <==
I0701 12:20:12.428842 1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24]
I0701 12:20:22.443154 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0701 12:20:22.443292 1 main.go:227] handling current node
I0701 12:20:22.443323 1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
I0701 12:20:22.443388 1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24]
I0701 12:20:22.443605 1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
I0701 12:20:22.443653 1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24]
I0701 12:20:22.443793 1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
I0701 12:20:22.443836 1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24]
I0701 12:20:32.451395 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0701 12:20:32.451431 1 main.go:227] handling current node
I0701 12:20:32.451481 1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
I0701 12:20:32.451486 1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24]
I0701 12:20:32.451947 1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
I0701 12:20:32.451980 1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24]
I0701 12:20:32.452873 1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
I0701 12:20:32.453015 1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24]
I0701 12:20:42.470169 1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
I0701 12:20:42.470264 1 main.go:227] handling current node
I0701 12:20:42.470289 1 main.go:223] Handling node with IPs: map[192.168.39.86:{}]
I0701 12:20:42.470302 1 main.go:250] Node ha-735960-m02 has CIDR [10.244.1.0/24]
I0701 12:20:42.470523 1 main.go:223] Handling node with IPs: map[192.168.39.97:{}]
I0701 12:20:42.470616 1 main.go:250] Node ha-735960-m03 has CIDR [10.244.2.0/24]
I0701 12:20:42.470868 1 main.go:223] Handling node with IPs: map[192.168.39.60:{}]
I0701 12:20:42.470914 1 main.go:250] Node ha-735960-m04 has CIDR [10.244.3.0/24]
==> kube-apiserver [e546c39248bc] <==
I0701 12:22:27.228496 1 options.go:221] external host was not specified, using 192.168.39.16
I0701 12:22:27.229584 1 server.go:148] Version: v1.30.2
I0701 12:22:27.229706 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 12:22:27.544729 1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
I0701 12:22:27.547846 1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0701 12:22:27.551600 1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0701 12:22:27.551634 1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0701 12:22:27.551982 1 instance.go:299] Using reconciler: lease
W0701 12:22:47.544372 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
W0701 12:22:47.544664 1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
F0701 12:22:47.553171 1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
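The fatal above completes the chain: the apiserver reached its etcd dial target on 127.0.0.1:2379, but lease creation timed out, which is consistent with the quorum-less etcd shown earlier rather than with a dead listener. A quick check that something is in fact listening (ss being present in the Buildroot guest is an assumption):

sudo ss -ltnp | grep -E ':2379|:2380'   # etcd client and peer ports; the hang is quorum, not connectivity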
==> kube-controller-manager [829fe19c75ce] <==
I0701 12:22:24.521097 1 serving.go:380] Generated self-signed cert in-memory
I0701 12:22:24.837441 1 controllermanager.go:189] "Starting" version="v1.30.2"
I0701 12:22:24.837478 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 12:22:24.839276 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0701 12:22:24.839470 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0701 12:22:24.839988 1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
I0701 12:22:24.840049 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0701 12:22:48.561111 1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.16:8443/healthz\": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57228->192.168.39.16:8443: read: connection reset by peer"
==> kube-proxy [6116abe6039d] <==
I0701 12:16:09.205590 1 server_linux.go:69] "Using iptables proxy"
I0701 12:16:09.223098 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
I0701 12:16:09.284088 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0701 12:16:09.284134 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0701 12:16:09.284152 1 server_linux.go:165] "Using iptables Proxier"
I0701 12:16:09.286802 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0701 12:16:09.287240 1 server.go:872] "Version info" version="v1.30.2"
I0701 12:16:09.287274 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0701 12:16:09.288803 1 config.go:192] "Starting service config controller"
I0701 12:16:09.288830 1 shared_informer.go:313] Waiting for caches to sync for service config
I0701 12:16:09.289262 1 config.go:101] "Starting endpoint slice config controller"
I0701 12:16:09.289283 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0701 12:16:09.290101 1 config.go:319] "Starting node config controller"
I0701 12:16:09.290125 1 shared_informer.go:313] Waiting for caches to sync for node config
I0701 12:16:09.389941 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0701 12:16:09.390030 1 shared_informer.go:320] Caches are synced for service config
I0701 12:16:09.390393 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [2d71437c5f06] <==
Trace[1841834859]: ---"Objects listed" error:Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer 10642ms (12:22:48.563)
Trace[1841834859]: [10.642423199s] [10.642423199s] END
E0701 12:22:48.563438 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.16:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:57242->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563506 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.563570 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.16:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59182->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563641 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.563665 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.16:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59186->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563724 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.563747 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.16:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59188->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563814 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.563830 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.16:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59202->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563886 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.563907 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59238->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563967 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.563982 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59262->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.563997 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
E0701 12:22:48.564229 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.16:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.16:59210->192.168.39.16:8443: read: connection reset by peer
W0701 12:22:48.669137 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
E0701 12:22:48.669192 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.16:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
W0701 12:22:51.792652 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
E0701 12:22:51.792757 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.16:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
W0701 12:22:52.248014 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
E0701 12:22:52.248063 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.16:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
W0701 12:22:55.201032 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
E0701 12:22:55.201141 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.16:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.16:8443: connect: connection refused
==> kube-scheduler [cb63d5441180] <==
W0701 12:15:50.916180 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0701 12:15:50.916379 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0701 12:15:51.752711 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0701 12:15:51.752853 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0701 12:15:51.794007 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0701 12:15:51.794055 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0701 12:15:51.931391 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0701 12:15:51.931434 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0701 12:15:51.950120 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0701 12:15:51.950162 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0701 12:15:51.968922 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0701 12:15:51.969125 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0701 12:15:51.985991 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0701 12:15:51.986032 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0701 12:15:52.054298 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0701 12:15:52.054329 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0701 12:15:52.260873 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0701 12:15:52.260979 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0701 12:15:54.206866 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0701 12:19:09.710917 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xv95g" node="ha-735960-m04"
E0701 12:19:09.713930 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xv95g\": pod kube-proxy-xv95g is already assigned to node \"ha-735960-m04\"" pod="kube-system/kube-proxy-xv95g"
I0701 12:21:01.200143 1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
I0701 12:21:01.200254 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0701 12:21:01.200659 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0701 12:21:01.212693 1 run.go:74] "command failed" err="finished without leader elect"
==> kubelet <==
Jul 01 12:22:42 ha-735960 kubelet[1610]: I0701 12:22:42.360672 1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
Jul 01 12:22:44 ha-735960 kubelet[1610]: E0701 12:22:44.574795 1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
Jul 01 12:22:44 ha-735960 kubelet[1610]: E0701 12:22:44.574858 1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
Jul 01 12:22:47 ha-735960 kubelet[1610]: E0701 12:22:47.092648 1610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-735960\" not found"
Jul 01 12:22:47 ha-735960 kubelet[1610]: E0701 12:22:47.646121 1610 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-735960.17de162e90ad8f5f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-735960,UID:ha-735960,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-735960,},FirstTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,LastTimestamp:2024-07-01 12:21:36.953708383 +0000 UTC m=+0.183371310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-735960,}"
Jul 01 12:22:48 ha-735960 kubelet[1610]: I0701 12:22:48.159877 1610 scope.go:117] "RemoveContainer" containerID="d97b6df80577316a9cf70b2af0f8d52bb2bd7071ff932a8f1f03df9497724786"
Jul 01 12:22:48 ha-735960 kubelet[1610]: I0701 12:22:48.161197 1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
Jul 01 12:22:48 ha-735960 kubelet[1610]: E0701 12:22:48.162173 1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
Jul 01 12:22:49 ha-735960 kubelet[1610]: I0701 12:22:49.180032 1610 scope.go:117] "RemoveContainer" containerID="ad4259a9c8ee03ff4c6910c68c5c866481fede150d57267cdc957e46aca4ef80"
Jul 01 12:22:49 ha-735960 kubelet[1610]: I0701 12:22:49.181799 1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
Jul 01 12:22:49 ha-735960 kubelet[1610]: E0701 12:22:49.182112 1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.089167 1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.089722 1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
Jul 01 12:22:50 ha-735960 kubelet[1610]: I0701 12:22:50.202365 1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
Jul 01 12:22:50 ha-735960 kubelet[1610]: E0701 12:22:50.202700 1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.209935 1610 scope.go:117] "RemoveContainer" containerID="829fe19c75ce30a13d3a4e33a2e2a760477739dbe6f611f1b4e60d69d0444f4d"
Jul 01 12:22:51 ha-735960 kubelet[1610]: E0701 12:22:51.210647 1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-735960_kube-system(9a545edc3c0d885e2370d3a24ff8ac4b)\"" pod="kube-system/kube-controller-manager-ha-735960" podUID="9a545edc3c0d885e2370d3a24ff8ac4b"
Jul 01 12:22:51 ha-735960 kubelet[1610]: I0701 12:22:51.576067 1610 kubelet_node_status.go:73] "Attempting to register node" node="ha-735960"
Jul 01 12:22:53 ha-735960 kubelet[1610]: I0701 12:22:53.728933 1610 scope.go:117] "RemoveContainer" containerID="e546c39248bc8ab525701bacc0a354650e0da853981a4624abd25fdba1c1ca30"
Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.729329 1610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-735960_kube-system(858bfcad8b1d02b8cdc3dc83c4af060c)\"" pod="kube-system/kube-apiserver-ha-735960" podUID="858bfcad8b1d02b8cdc3dc83c4af060c"
Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.789831 1610 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.254:8443: connect: no route to host" node="ha-735960"
Jul 01 12:22:53 ha-735960 kubelet[1610]: E0701 12:22:53.790000 1610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-735960?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
Jul 01 12:22:56 ha-735960 kubelet[1610]: W0701 12:22:56.862031 1610 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
Jul 01 12:22:56 ha-735960 kubelet[1610]: E0701 12:22:56.862122 1610 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
Jul 01 12:22:57 ha-735960 kubelet[1610]: E0701 12:22:57.094040 1610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-735960\" not found"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-735960 -n ha-735960: exit status 2 (229.492615ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-735960" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (146.89s)