=== RUN TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run: out/minikube-linux-amd64 node list -p ha-046009 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run: out/minikube-linux-amd64 stop -p ha-046009 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-046009 -v=7 --alsologtostderr: (42.377087867s)
ha_test.go:469: (dbg) Run: out/minikube-linux-amd64 start -p ha-046009 --wait=true -v=7 --alsologtostderr
E0408 18:39:51.454487 546311 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-538981/.minikube/profiles/functional-171690/client.crt: no such file or directory" logger="UnhandledError"
E0408 18:41:09.450655 546311 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20604-538981/.minikube/profiles/addons-948010/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-046009 --wait=true -v=7 --alsologtostderr: exit status 90 (1m23.773417075s)
-- stdout --
* [ha-046009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20604
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20604-538981/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-538981/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "ha-046009" primary control-plane node in "ha-046009" cluster
* Restarting existing kvm2 VM for "ha-046009" ...
-- /stdout --
** stderr **
I0408 18:39:47.701870 560229 out.go:345] Setting OutFile to fd 1 ...
I0408 18:39:47.702135 560229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:39:47.702145 560229 out.go:358] Setting ErrFile to fd 2...
I0408 18:39:47.702150 560229 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 18:39:47.702358 560229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-538981/.minikube/bin
I0408 18:39:47.702956 560229 out.go:352] Setting JSON to false
I0408 18:39:47.703949 560229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4937,"bootTime":1744132651,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0408 18:39:47.704005 560229 start.go:139] virtualization: kvm guest
I0408 18:39:47.705859 560229 out.go:177] * [ha-046009] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0408 18:39:47.706846 560229 out.go:177] - MINIKUBE_LOCATION=20604
I0408 18:39:47.706878 560229 notify.go:220] Checking for updates...
I0408 18:39:47.708745 560229 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 18:39:47.709767 560229 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20604-538981/kubeconfig
I0408 18:39:47.710656 560229 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-538981/.minikube
I0408 18:39:47.711677 560229 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0408 18:39:47.712669 560229 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0408 18:39:47.714154 560229 config.go:182] Loaded profile config "ha-046009": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0408 18:39:47.714252 560229 driver.go:394] Setting default libvirt URI to qemu:///system
I0408 18:39:47.714736 560229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0408 18:39:47.714817 560229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:39:47.729839 560229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
I0408 18:39:47.730290 560229 main.go:141] libmachine: () Calling .GetVersion
I0408 18:39:47.730737 560229 main.go:141] libmachine: Using API Version 1
I0408 18:39:47.730768 560229 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:39:47.731109 560229 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:39:47.731285 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:39:47.765483 560229 out.go:177] * Using the kvm2 driver based on existing profile
I0408 18:39:47.766728 560229 start.go:297] selected driver: kvm2
I0408 18:39:47.766767 560229 start.go:901] validating driver "kvm2" against &{Name:ha-046009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-046009 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 18:39:47.766941 560229 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 18:39:47.767391 560229 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 18:39:47.767481 560229 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20604-538981/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0408 18:39:47.783247 560229 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0408 18:39:47.784321 560229 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 18:39:47.784377 560229 cni.go:84] Creating CNI manager for ""
I0408 18:39:47.784459 560229 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0408 18:39:47.784550 560229 start.go:340] cluster config:
{Name:ha-046009 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-046009 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.51 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 18:39:47.784768 560229 iso.go:125] acquiring lock: {Name:mk4e70a1080c20df8ba5df6fb273e5bc7e5d343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 18:39:47.786518 560229 out.go:177] * Starting "ha-046009" primary control-plane node in "ha-046009" cluster
I0408 18:39:47.787667 560229 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0408 18:39:47.787712 560229 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-538981/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
I0408 18:39:47.787729 560229 cache.go:56] Caching tarball of preloaded images
I0408 18:39:47.787887 560229 preload.go:172] Found /home/jenkins/minikube-integration/20604-538981/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0408 18:39:47.787909 560229 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
I0408 18:39:47.788047 560229 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-538981/.minikube/profiles/ha-046009/config.json ...
I0408 18:39:47.788250 560229 start.go:360] acquireMachinesLock for ha-046009: {Name:mk475a46f9c399e0e51b6ae0f542534fa97d822c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 18:39:47.788295 560229 start.go:364] duration metric: took 26.979µs to acquireMachinesLock for "ha-046009"
I0408 18:39:47.788310 560229 start.go:96] Skipping create...Using existing machine configuration
I0408 18:39:47.788317 560229 fix.go:54] fixHost starting:
I0408 18:39:47.788554 560229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0408 18:39:47.788584 560229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0408 18:39:47.803408 560229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
I0408 18:39:47.803934 560229 main.go:141] libmachine: () Calling .GetVersion
I0408 18:39:47.804441 560229 main.go:141] libmachine: Using API Version 1
I0408 18:39:47.804461 560229 main.go:141] libmachine: () Calling .SetConfigRaw
I0408 18:39:47.804874 560229 main.go:141] libmachine: () Calling .GetMachineName
I0408 18:39:47.805066 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:39:47.805280 560229 main.go:141] libmachine: (ha-046009) Calling .GetState
I0408 18:39:47.806819 560229 fix.go:112] recreateIfNeeded on ha-046009: state=Stopped err=<nil>
I0408 18:39:47.806849 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
W0408 18:39:47.807005 560229 fix.go:138] unexpected machine state, will restart: <nil>
I0408 18:39:47.808790 560229 out.go:177] * Restarting existing kvm2 VM for "ha-046009" ...
I0408 18:39:47.809921 560229 main.go:141] libmachine: (ha-046009) Calling .Start
I0408 18:39:47.810084 560229 main.go:141] libmachine: (ha-046009) starting domain...
I0408 18:39:47.810104 560229 main.go:141] libmachine: (ha-046009) ensuring networks are active...
I0408 18:39:47.810797 560229 main.go:141] libmachine: (ha-046009) Ensuring network default is active
I0408 18:39:47.811108 560229 main.go:141] libmachine: (ha-046009) Ensuring network mk-ha-046009 is active
I0408 18:39:47.811451 560229 main.go:141] libmachine: (ha-046009) getting domain XML...
I0408 18:39:47.812180 560229 main.go:141] libmachine: (ha-046009) creating domain...
I0408 18:39:48.998527 560229 main.go:141] libmachine: (ha-046009) waiting for IP...
I0408 18:39:48.999378 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:48.999710 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:48.999834 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:48.999744 560259 retry.go:31] will retry after 312.373823ms: waiting for domain to come up
I0408 18:39:49.313482 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:49.313898 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:49.313954 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:49.313889 560259 retry.go:31] will retry after 291.094063ms: waiting for domain to come up
I0408 18:39:49.606425 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:49.606826 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:49.606852 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:49.606796 560259 retry.go:31] will retry after 485.195917ms: waiting for domain to come up
I0408 18:39:50.093408 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:50.093818 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:50.093847 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:50.093789 560259 retry.go:31] will retry after 557.05202ms: waiting for domain to come up
I0408 18:39:50.652550 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:50.652970 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:50.652998 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:50.652942 560259 retry.go:31] will retry after 563.911461ms: waiting for domain to come up
I0408 18:39:51.218672 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:51.219024 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:51.219053 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:51.219001 560259 retry.go:31] will retry after 596.928439ms: waiting for domain to come up
I0408 18:39:51.817802 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:51.818191 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:51.818222 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:51.818167 560259 retry.go:31] will retry after 806.318667ms: waiting for domain to come up
I0408 18:39:52.625952 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:52.626290 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:52.626320 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:52.626252 560259 retry.go:31] will retry after 1.072489409s: waiting for domain to come up
I0408 18:39:53.700379 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:53.700814 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:53.700843 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:53.700761 560259 retry.go:31] will retry after 1.322703709s: waiting for domain to come up
I0408 18:39:55.025275 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:55.025657 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:55.025721 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:55.025643 560259 retry.go:31] will retry after 2.32650543s: waiting for domain to come up
I0408 18:39:57.354086 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:57.354436 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:57.354464 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:57.354396 560259 retry.go:31] will retry after 2.281592177s: waiting for domain to come up
I0408 18:39:59.638958 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:39:59.639441 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:39:59.639469 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:39:59.639409 560259 retry.go:31] will retry after 3.376474622s: waiting for domain to come up
I0408 18:40:03.016978 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:03.017464 560229 main.go:141] libmachine: (ha-046009) DBG | unable to find current IP address of domain ha-046009 in network mk-ha-046009
I0408 18:40:03.017493 560229 main.go:141] libmachine: (ha-046009) DBG | I0408 18:40:03.017425 560259 retry.go:31] will retry after 2.968633868s: waiting for domain to come up
I0408 18:40:05.987320 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:05.987797 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has current primary IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:05.987816 560229 main.go:141] libmachine: (ha-046009) found domain IP: 192.168.39.167
I0408 18:40:05.987835 560229 main.go:141] libmachine: (ha-046009) reserving static IP address...
I0408 18:40:05.988245 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "ha-046009", mac: "52:54:00:23:86:da", ip: "192.168.39.167"} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:05.988308 560229 main.go:141] libmachine: (ha-046009) reserved static IP address 192.168.39.167 for domain ha-046009
I0408 18:40:05.988336 560229 main.go:141] libmachine: (ha-046009) DBG | skip adding static IP to network mk-ha-046009 - found existing host DHCP lease matching {name: "ha-046009", mac: "52:54:00:23:86:da", ip: "192.168.39.167"}
I0408 18:40:05.988358 560229 main.go:141] libmachine: (ha-046009) DBG | Getting to WaitForSSH function...
I0408 18:40:05.988369 560229 main.go:141] libmachine: (ha-046009) waiting for SSH...
I0408 18:40:05.990399 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:05.990739 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:05.990769 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:05.990841 560229 main.go:141] libmachine: (ha-046009) DBG | Using SSH client type: external
I0408 18:40:05.990882 560229 main.go:141] libmachine: (ha-046009) DBG | Using SSH private key: /home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa (-rw-------)
I0408 18:40:05.990914 560229 main.go:141] libmachine: (ha-046009) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa -p 22] /usr/bin/ssh <nil>}
I0408 18:40:05.990939 560229 main.go:141] libmachine: (ha-046009) DBG | About to run SSH command:
I0408 18:40:05.990954 560229 main.go:141] libmachine: (ha-046009) DBG | exit 0
I0408 18:40:06.115054 560229 main.go:141] libmachine: (ha-046009) DBG | SSH cmd err, output: <nil>:
I0408 18:40:06.115398 560229 main.go:141] libmachine: (ha-046009) Calling .GetConfigRaw
I0408 18:40:06.116062 560229 main.go:141] libmachine: (ha-046009) Calling .GetIP
I0408 18:40:06.118588 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.119018 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.119059 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.119450 560229 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-538981/.minikube/profiles/ha-046009/config.json ...
I0408 18:40:06.119651 560229 machine.go:93] provisionDockerMachine start ...
I0408 18:40:06.119670 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:06.119933 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.122134 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.122501 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.122520 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.122673 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:06.122854 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.123056 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.123188 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:06.123438 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:06.123655 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:06.123666 560229 main.go:141] libmachine: About to run SSH command:
hostname
I0408 18:40:06.231488 560229 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0408 18:40:06.231514 560229 main.go:141] libmachine: (ha-046009) Calling .GetMachineName
I0408 18:40:06.231762 560229 buildroot.go:166] provisioning hostname "ha-046009"
I0408 18:40:06.231793 560229 main.go:141] libmachine: (ha-046009) Calling .GetMachineName
I0408 18:40:06.231998 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.234814 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.235221 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.235267 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.235404 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:06.235602 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.235778 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.235900 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:06.236043 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:06.236289 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:06.236302 560229 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-046009 && echo "ha-046009" | sudo tee /etc/hostname
I0408 18:40:06.356943 560229 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-046009
I0408 18:40:06.356973 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.359578 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.360021 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.360060 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.360202 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:06.360368 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.360567 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.360699 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:06.360846 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:06.361028 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:06.361042 560229 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-046009' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-046009/g' /etc/hosts;
else
echo '127.0.1.1 ha-046009' | sudo tee -a /etc/hosts;
fi
fi
I0408 18:40:06.475716 560229 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0408 18:40:06.475755 560229 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20604-538981/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-538981/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-538981/.minikube}
I0408 18:40:06.475802 560229 buildroot.go:174] setting up certificates
I0408 18:40:06.475816 560229 provision.go:84] configureAuth start
I0408 18:40:06.475834 560229 main.go:141] libmachine: (ha-046009) Calling .GetMachineName
I0408 18:40:06.476103 560229 main.go:141] libmachine: (ha-046009) Calling .GetIP
I0408 18:40:06.478485 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.478835 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.478873 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.478976 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.480860 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.481170 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.481210 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.481371 560229 provision.go:143] copyHostCerts
I0408 18:40:06.481400 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20604-538981/.minikube/cert.pem
I0408 18:40:06.481428 560229 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-538981/.minikube/cert.pem, removing ...
I0408 18:40:06.481445 560229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-538981/.minikube/cert.pem
I0408 18:40:06.481509 560229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-538981/.minikube/cert.pem (1123 bytes)
I0408 18:40:06.481602 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20604-538981/.minikube/key.pem
I0408 18:40:06.481618 560229 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-538981/.minikube/key.pem, removing ...
I0408 18:40:06.481624 560229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-538981/.minikube/key.pem
I0408 18:40:06.481647 560229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-538981/.minikube/key.pem (1675 bytes)
I0408 18:40:06.481699 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20604-538981/.minikube/ca.pem
I0408 18:40:06.481721 560229 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-538981/.minikube/ca.pem, removing ...
I0408 18:40:06.481728 560229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-538981/.minikube/ca.pem
I0408 18:40:06.481748 560229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-538981/.minikube/ca.pem (1082 bytes)
I0408 18:40:06.481808 560229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-538981/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca-key.pem org=jenkins.ha-046009 san=[127.0.0.1 192.168.39.167 ha-046009 localhost minikube]
I0408 18:40:06.628421 560229 provision.go:177] copyRemoteCerts
I0408 18:40:06.628484 560229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0408 18:40:06.628509 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.631045 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.631396 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.631416 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.631614 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:06.631819 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.631937 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:06.632036 560229 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa Username:docker}
I0408 18:40:06.712850 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0408 18:40:06.712919 560229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-538981/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0408 18:40:06.735822 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/machines/server.pem -> /etc/docker/server.pem
I0408 18:40:06.735881 560229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-538981/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0408 18:40:06.758349 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0408 18:40:06.758393 560229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-538981/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0408 18:40:06.780509 560229 provision.go:87] duration metric: took 304.676726ms to configureAuth
I0408 18:40:06.780531 560229 buildroot.go:189] setting minikube options for container-runtime
I0408 18:40:06.780735 560229 config.go:182] Loaded profile config "ha-046009": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0408 18:40:06.780765 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:06.781027 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.783638 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.784039 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.784066 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.784244 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:06.784419 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.784547 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.784647 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:06.784784 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:06.784981 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:06.784992 560229 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0408 18:40:06.892404 560229 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0408 18:40:06.892431 560229 buildroot.go:70] root file system type: tmpfs
I0408 18:40:06.892593 560229 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0408 18:40:06.892616 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:06.895189 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.895480 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:06.895523 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:06.895698 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:06.895889 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.896075 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:06.896233 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:06.896383 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:06.896662 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:06.896755 560229 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0408 18:40:07.016747 560229 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0408 18:40:07.016780 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:07.019448 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:07.019821 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:07.019855 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:07.020010 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:07.020179 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:07.020293 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:07.020435 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:07.020644 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:07.020882 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:07.020908 560229 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0408 18:40:08.986022 560229 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0408 18:40:08.986056 560229 machine.go:96] duration metric: took 2.866390525s to provisionDockerMachine
I0408 18:40:08.986072 560229 start.go:293] postStartSetup for "ha-046009" (driver="kvm2")
I0408 18:40:08.986082 560229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0408 18:40:08.986108 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:08.986445 560229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0408 18:40:08.986476 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:08.989052 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:08.989425 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:08.989450 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:08.989589 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:08.989772 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:08.989913 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:08.990023 560229 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa Username:docker}
I0408 18:40:09.073336 560229 ssh_runner.go:195] Run: cat /etc/os-release
I0408 18:40:09.077392 560229 info.go:137] Remote host: Buildroot 2023.02.9
I0408 18:40:09.077414 560229 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-538981/.minikube/addons for local assets ...
I0408 18:40:09.077481 560229 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-538981/.minikube/files for local assets ...
I0408 18:40:09.077600 560229 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-538981/.minikube/files/etc/ssl/certs/5463112.pem -> 5463112.pem in /etc/ssl/certs
I0408 18:40:09.077615 560229 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20604-538981/.minikube/files/etc/ssl/certs/5463112.pem -> /etc/ssl/certs/5463112.pem
I0408 18:40:09.077727 560229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0408 18:40:09.086996 560229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-538981/.minikube/files/etc/ssl/certs/5463112.pem --> /etc/ssl/certs/5463112.pem (1708 bytes)
I0408 18:40:09.109395 560229 start.go:296] duration metric: took 123.311908ms for postStartSetup
I0408 18:40:09.109438 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:09.109704 560229 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0408 18:40:09.109728 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:09.112113 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.112449 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:09.112478 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.112570 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:09.112745 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:09.112892 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:09.113018 560229 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa Username:docker}
I0408 18:40:09.197484 560229 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
I0408 18:40:09.197572 560229 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0408 18:40:09.256027 560229 fix.go:56] duration metric: took 21.467698002s for fixHost
I0408 18:40:09.256076 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:09.258664 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.259048 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:09.259079 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.259382 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:09.259658 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:09.259867 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:09.260004 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:09.260205 560229 main.go:141] libmachine: Using SSH client type: native
I0408 18:40:09.260457 560229 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.167 22 <nil> <nil>}
I0408 18:40:09.260470 560229 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0408 18:40:09.367911 560229 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744137609.343853711
I0408 18:40:09.367941 560229 fix.go:216] guest clock: 1744137609.343853711
I0408 18:40:09.367974 560229 fix.go:229] Guest: 2025-04-08 18:40:09.343853711 +0000 UTC Remote: 2025-04-08 18:40:09.256056862 +0000 UTC m=+21.591085512 (delta=87.796849ms)
I0408 18:40:09.368031 560229 fix.go:200] guest clock delta is within tolerance: 87.796849ms
I0408 18:40:09.368043 560229 start.go:83] releasing machines lock for "ha-046009", held for 21.579736841s
I0408 18:40:09.368086 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:09.368346 560229 main.go:141] libmachine: (ha-046009) Calling .GetIP
I0408 18:40:09.370865 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.371243 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:09.371273 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.371439 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:09.371935 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:09.372097 560229 main.go:141] libmachine: (ha-046009) Calling .DriverName
I0408 18:40:09.372171 560229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0408 18:40:09.372217 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:09.372272 560229 ssh_runner.go:195] Run: cat /version.json
I0408 18:40:09.372297 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHHostname
I0408 18:40:09.374732 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.374989 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.375084 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:09.375122 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.375236 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:09.375326 560229 main.go:141] libmachine: (ha-046009) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:86:da", ip: ""} in network mk-ha-046009: {Iface:virbr1 ExpiryTime:2025-04-08 19:39:59 +0000 UTC Type:0 Mac:52:54:00:23:86:da Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-046009 Clientid:01:52:54:00:23:86:da}
I0408 18:40:09.375368 560229 main.go:141] libmachine: (ha-046009) DBG | domain ha-046009 has defined IP address 192.168.39.167 and MAC address 52:54:00:23:86:da in network mk-ha-046009
I0408 18:40:09.375402 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:09.375537 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHPort
I0408 18:40:09.375682 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHKeyPath
I0408 18:40:09.375694 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:09.375845 560229 main.go:141] libmachine: (ha-046009) Calling .GetSSHUsername
I0408 18:40:09.375851 560229 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa Username:docker}
I0408 18:40:09.375948 560229 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20604-538981/.minikube/machines/ha-046009/id_rsa Username:docker}
I0408 18:40:09.463971 560229 ssh_runner.go:195] Run: systemctl --version
I0408 18:40:09.484161 560229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0408 18:40:09.490018 560229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0408 18:40:09.490092 560229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0408 18:40:09.513004 560229 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
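(The find/mv one-liner above sidelines competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; per the line above, exactly one file was moved in this run. A hypothetical follow-up check, not part of the logged run:

$ ls /etc/cni/net.d/     # expect 87-podman-bridge.conflist.mk_disabled after the rename
$ sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
          /etc/cni/net.d/87-podman-bridge.conflist   # undo, if ever needed)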
I0408 18:40:09.513032 560229 start.go:495] detecting cgroup driver to use...
I0408 18:40:09.513171 560229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0408 18:40:09.535033 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0408 18:40:09.545396 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0408 18:40:09.555295 560229 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0408 18:40:09.555383 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0408 18:40:09.565316 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 18:40:09.575386 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0408 18:40:09.585294 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 18:40:09.595189 560229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0408 18:40:09.605838 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0408 18:40:09.615892 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0408 18:40:09.626658 560229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
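(Taken together, the sed edits above should leave /etc/containerd/config.toml with roughly the values below. This is an illustrative fragment only — the surrounding TOML layout depends on the config containerd generated:

$ grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    sandbox_image = "registry.k8s.io/pause:3.10"
    restrict_oom_score_adj = false
    SystemdCgroup = false
    conf_dir = "/etc/cni/net.d"
    enable_unprivileged_ports = true)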
I0408 18:40:09.637359 560229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0408 18:40:09.646906 560229 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0408 18:40:09.646949 560229 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0408 18:40:09.657439 560229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
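(The sysctl probe at 18:40:09.637 exits 255 simply because br_netfilter is not loaded yet — the message itself flags this as "might be okay" — and the modprobe that follows is the fix. Verifying the fallback by hand would look like this; hypothetical follow-up commands, not part of the logged run:

$ lsmod | grep br_netfilter                       # module should now be listed
$ sudo sysctl net.bridge.bridge-nf-call-iptables  # expect: net.bridge.bridge-nf-call-iptables = 1)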
I0408 18:40:09.666857 560229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 18:40:09.777919 560229 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0408 18:40:09.802366 560229 start.go:495] detecting cgroup driver to use...
I0408 18:40:09.802474 560229 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0408 18:40:09.825573 560229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0408 18:40:09.841562 560229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0408 18:40:09.865813 560229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0408 18:40:09.879542 560229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0408 18:40:09.892166 560229 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0408 18:40:09.915812 560229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0408 18:40:09.928534 560229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0408 18:40:09.945930 560229 ssh_runner.go:195] Run: which cri-dockerd
I0408 18:40:09.949599 560229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0408 18:40:09.958757 560229 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0408 18:40:09.974806 560229 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0408 18:40:10.094096 560229 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0408 18:40:10.208887 560229 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0408 18:40:10.209011 560229 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
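(The log records only the size of the daemon.json pushed here, 130 bytes, not its contents. A representative file for the "cgroupfs" configuration named on the previous line would look something like the following — treat the exact keys as an assumption, not the literal bytes from this run:

$ cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
})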
I0408 18:40:10.226205 560229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 18:40:10.339486 560229 ssh_runner.go:195] Run: sudo systemctl restart docker
I0408 18:41:11.401887 560229 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.062355292s)
I0408 18:41:11.402020 560229 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0408 18:41:11.423494 560229 out.go:201]
W0408 18:41:11.425166 560229 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Apr 08 18:40:07 ha-046009 systemd[1]: Starting Docker Application Container Engine...
Apr 08 18:40:07 ha-046009 dockerd[496]: time="2025-04-08T18:40:07.371263252Z" level=info msg="Starting up"
Apr 08 18:40:07 ha-046009 dockerd[496]: time="2025-04-08T18:40:07.371985889Z" level=info msg="containerd not running, starting managed containerd"
Apr 08 18:40:07 ha-046009 dockerd[496]: time="2025-04-08T18:40:07.372571593Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=503
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.401695793Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422126406Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422236515Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422321541Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422361322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422639068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422789450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.422972847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.423019898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.423058754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.423095966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.423347616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.423742212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.426057757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.426146811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.426320538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.426365803Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.426758955Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.426843169Z" level=info msg="metadata content store policy set" policy=shared
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.429291402Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.429483607Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.429605800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.429654164Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.429741411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.429829919Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430356065Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430512607Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430567972Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430622513Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430663285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430767937Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430807681Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430859064Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430898810Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430935412Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.430972741Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431008451Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431068954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431111354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431161409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431205826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431247402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431284851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431327375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431368958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431406507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431449434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431489267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431526102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431562767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431601355Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431654795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431691360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431789483Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.431901675Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432154612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432208821Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432252995Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432288307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432329481Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432374413Z" level=info msg="NRI interface is disabled by configuration."
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.432829861Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.433059756Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.433155391Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 08 18:40:07 ha-046009 dockerd[503]: time="2025-04-08T18:40:07.433292398Z" level=info msg="containerd successfully booted in 0.033410s"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.399830353Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.446970347Z" level=info msg="Loading containers: start."
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.770755118Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.852267248Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.921085442Z" level=info msg="Loading containers: done."
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.933552304Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.933588936Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.933613090Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.933826594Z" level=info msg="Daemon has completed initialization"
Apr 08 18:40:08 ha-046009 systemd[1]: Started Docker Application Container Engine.
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.960191907Z" level=info msg="API listen on /var/run/docker.sock"
Apr 08 18:40:08 ha-046009 dockerd[496]: time="2025-04-08T18:40:08.960240047Z" level=info msg="API listen on [::]:2376"
Apr 08 18:40:10 ha-046009 dockerd[496]: time="2025-04-08T18:40:10.330092363Z" level=info msg="Processing signal 'terminated'"
Apr 08 18:40:10 ha-046009 dockerd[496]: time="2025-04-08T18:40:10.331239176Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 08 18:40:10 ha-046009 dockerd[496]: time="2025-04-08T18:40:10.331767733Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 08 18:40:10 ha-046009 dockerd[496]: time="2025-04-08T18:40:10.332035996Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 08 18:40:10 ha-046009 systemd[1]: Stopping Docker Application Container Engine...
Apr 08 18:40:10 ha-046009 dockerd[496]: time="2025-04-08T18:40:10.332195099Z" level=info msg="Daemon shutdown complete"
Apr 08 18:40:11 ha-046009 systemd[1]: docker.service: Deactivated successfully.
Apr 08 18:40:11 ha-046009 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 18:40:11 ha-046009 systemd[1]: Starting Docker Application Container Engine...
Apr 08 18:40:11 ha-046009 dockerd[1112]: time="2025-04-08T18:40:11.369522811Z" level=info msg="Starting up"
Apr 08 18:41:11 ha-046009 dockerd[1112]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 18:41:11 ha-046009 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 18:41:11 ha-046009 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 18:41:11 ha-046009 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0408 18:41:11.425225 560229 out.go:270] *
W0408 18:41:11.426206 560229 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 18:41:11.427710 560229 out.go:201]
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-046009 -v=7 --alsologtostderr" : exit status 90
ha_test.go:474: (dbg) Run: out/minikube-linux-amd64 node list -p ha-046009
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p ha-046009 -n ha-046009
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-046009 -n ha-046009: exit status 6 (219.842305ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0408 18:41:11.709819 560628 status.go:458] kubeconfig endpoint: get endpoint: "ha-046009" does not appear in /home/jenkins/minikube-integration/20604-538981/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-046009" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.50s)
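(Post-mortem note: the failure signature sits in the journalctl excerpt above. The first dockerd, pid 496, reports "containerd not running, starting managed containerd" and comes up fine, while the restarted dockerd, pid 1112, instead blocks for the full 60s dialing /run/containerd/containerd.sock and exits with "context deadline exceeded". One plausible reading is that the restarted daemon is configured to use the system containerd socket, which this run had stopped a second earlier (systemctl stop -f containerd at 18:40:09.841); confirming that would require the unit file and containerd state from inside the VM, e.g.:

$ sudo systemctl cat docker.service                       # look for a --containerd=... flag in ExecStart
$ sudo systemctl status containerd --no-pager             # was containerd left stopped?
$ ls -l /run/containerd/containerd.sock                   # does the socket the daemon dials exist?
$ sudo journalctl -u containerd --no-pager | tail -n 50   # any containerd-side errors)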