=== RUN TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run: out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2
E0414 14:25:32.139132 659249 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/functional-625084/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (1m23.760847776s)
-- stdout --
* [multinode-185794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=20512
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "multinode-185794" primary control-plane node in "multinode-185794" cluster
* Restarting existing kvm2 VM for "multinode-185794" ...
-- /stdout --
** stderr **
I0414 14:24:32.296320 685943 out.go:345] Setting OutFile to fd 1 ...
I0414 14:24:32.296568 685943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:24:32.296577 685943 out.go:358] Setting ErrFile to fd 2...
I0414 14:24:32.296581 685943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 14:24:32.296752 685943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20512-652075/.minikube/bin
I0414 14:24:32.297278 685943 out.go:352] Setting JSON to false
I0414 14:24:32.298228 685943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":29223,"bootTime":1744611449,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0414 14:24:32.298290 685943 start.go:139] virtualization: kvm guest
I0414 14:24:32.300170 685943 out.go:177] * [multinode-185794] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0414 14:24:32.301448 685943 out.go:177] - MINIKUBE_LOCATION=20512
I0414 14:24:32.301444 685943 notify.go:220] Checking for updates...
I0414 14:24:32.303828 685943 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 14:24:32.305036 685943 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20512-652075/kubeconfig
I0414 14:24:32.306127 685943 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20512-652075/.minikube
I0414 14:24:32.307098 685943 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0414 14:24:32.308091 685943 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 14:24:32.309479 685943 config.go:182] Loaded profile config "multinode-185794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 14:24:32.309843 685943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 14:24:32.309901 685943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:24:32.326380 685943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
I0414 14:24:32.326970 685943 main.go:141] libmachine: () Calling .GetVersion
I0414 14:24:32.327634 685943 main.go:141] libmachine: Using API Version 1
I0414 14:24:32.327672 685943 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:24:32.328047 685943 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:24:32.328246 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:32.328480 685943 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 14:24:32.328816 685943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 14:24:32.328871 685943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:24:32.344288 685943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
I0414 14:24:32.344777 685943 main.go:141] libmachine: () Calling .GetVersion
I0414 14:24:32.345337 685943 main.go:141] libmachine: Using API Version 1
I0414 14:24:32.345360 685943 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:24:32.345670 685943 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:24:32.345839 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:32.382663 685943 out.go:177] * Using the kvm2 driver based on existing profile
I0414 14:24:32.383858 685943 start.go:297] selected driver: kvm2
I0414 14:24:32.383877 685943 start.go:901] validating driver "kvm2" against &{Name:multinode-185794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-185794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.75 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 14:24:32.384007 685943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 14:24:32.384350 685943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:24:32.384424 685943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20512-652075/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 14:24:32.400421 685943 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0414 14:24:32.401202 685943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 14:24:32.401245 685943 cni.go:84] Creating CNI manager for ""
I0414 14:24:32.401287 685943 cni.go:136] multinode detected (2 nodes found), recommending kindnet
I0414 14:24:32.401365 685943 start.go:340] cluster config:
{Name:multinode-185794 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-185794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.75 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 14:24:32.401501 685943 iso.go:125] acquiring lock: {Name:mk31812832bbbb744b9a661285e7c7972432ea16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 14:24:32.404099 685943 out.go:177] * Starting "multinode-185794" primary control-plane node in "multinode-185794" cluster
I0414 14:24:32.405297 685943 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0414 14:24:32.405339 685943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
I0414 14:24:32.405349 685943 cache.go:56] Caching tarball of preloaded images
I0414 14:24:32.405465 685943 preload.go:172] Found /home/jenkins/minikube-integration/20512-652075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0414 14:24:32.405479 685943 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
I0414 14:24:32.405618 685943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/multinode-185794/config.json ...
I0414 14:24:32.405812 685943 start.go:360] acquireMachinesLock for multinode-185794: {Name:mk9c6cfa0e29a56fc46c94c59cf5ffe9bb360df2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 14:24:32.405865 685943 start.go:364] duration metric: took 31.854µs to acquireMachinesLock for "multinode-185794"
I0414 14:24:32.405879 685943 start.go:96] Skipping create...Using existing machine configuration
I0414 14:24:32.405887 685943 fix.go:54] fixHost starting:
I0414 14:24:32.406166 685943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0414 14:24:32.406200 685943 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 14:24:32.421699 685943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42309
I0414 14:24:32.422149 685943 main.go:141] libmachine: () Calling .GetVersion
I0414 14:24:32.422566 685943 main.go:141] libmachine: Using API Version 1
I0414 14:24:32.422587 685943 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 14:24:32.422933 685943 main.go:141] libmachine: () Calling .GetMachineName
I0414 14:24:32.423131 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:32.423315 685943 main.go:141] libmachine: (multinode-185794) Calling .GetState
I0414 14:24:32.425030 685943 fix.go:112] recreateIfNeeded on multinode-185794: state=Stopped err=<nil>
I0414 14:24:32.425056 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
W0414 14:24:32.425207 685943 fix.go:138] unexpected machine state, will restart: <nil>
I0414 14:24:32.426831 685943 out.go:177] * Restarting existing kvm2 VM for "multinode-185794" ...
I0414 14:24:32.427917 685943 main.go:141] libmachine: (multinode-185794) Calling .Start
I0414 14:24:32.428094 685943 main.go:141] libmachine: (multinode-185794) starting domain...
I0414 14:24:32.428118 685943 main.go:141] libmachine: (multinode-185794) ensuring networks are active...
I0414 14:24:32.428870 685943 main.go:141] libmachine: (multinode-185794) Ensuring network default is active
I0414 14:24:32.429194 685943 main.go:141] libmachine: (multinode-185794) Ensuring network mk-multinode-185794 is active
I0414 14:24:32.429598 685943 main.go:141] libmachine: (multinode-185794) getting domain XML...
I0414 14:24:32.430295 685943 main.go:141] libmachine: (multinode-185794) creating domain...
I0414 14:24:33.656465 685943 main.go:141] libmachine: (multinode-185794) waiting for IP...
I0414 14:24:33.657371 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:33.657691 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:33.657804 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:33.657702 685980 retry.go:31] will retry after 232.613535ms: waiting for domain to come up
I0414 14:24:33.892514 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:33.893012 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:33.893039 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:33.892989 685980 retry.go:31] will retry after 383.114871ms: waiting for domain to come up
I0414 14:24:34.277559 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:34.278009 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:34.278085 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:34.277977 685980 retry.go:31] will retry after 433.749538ms: waiting for domain to come up
I0414 14:24:34.713608 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:34.714052 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:34.714069 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:34.714019 685980 retry.go:31] will retry after 472.018858ms: waiting for domain to come up
I0414 14:24:35.187735 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:35.188126 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:35.188158 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:35.188060 685980 retry.go:31] will retry after 673.400984ms: waiting for domain to come up
I0414 14:24:35.862738 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:35.863227 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:35.863247 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:35.863193 685980 retry.go:31] will retry after 923.336117ms: waiting for domain to come up
I0414 14:24:36.788282 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:36.788659 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:36.788689 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:36.788600 685980 retry.go:31] will retry after 1.136758576s: waiting for domain to come up
I0414 14:24:37.926786 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:37.927246 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:37.927271 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:37.927183 685980 retry.go:31] will retry after 1.19877191s: waiting for domain to come up
I0414 14:24:39.127736 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:39.128151 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:39.128176 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:39.128121 685980 retry.go:31] will retry after 1.846405888s: waiting for domain to come up
I0414 14:24:40.976570 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:40.977031 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:40.977065 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:40.976979 685980 retry.go:31] will retry after 1.553555796s: waiting for domain to come up
I0414 14:24:42.531874 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:42.532401 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:42.532478 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:42.532395 685980 retry.go:31] will retry after 1.941296316s: waiting for domain to come up
I0414 14:24:44.476430 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:44.476906 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:44.476972 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:44.476872 685980 retry.go:31] will retry after 3.039598021s: waiting for domain to come up
I0414 14:24:47.518016 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:47.518473 685943 main.go:141] libmachine: (multinode-185794) DBG | unable to find current IP address of domain multinode-185794 in network mk-multinode-185794
I0414 14:24:47.518498 685943 main.go:141] libmachine: (multinode-185794) DBG | I0414 14:24:47.518433 685980 retry.go:31] will retry after 3.265785149s: waiting for domain to come up
I0414 14:24:50.788059 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.788450 685943 main.go:141] libmachine: (multinode-185794) found domain IP: 192.168.39.164
I0414 14:24:50.788479 685943 main.go:141] libmachine: (multinode-185794) reserving static IP address...
I0414 14:24:50.788512 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has current primary IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.788971 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "multinode-185794", mac: "52:54:00:92:f4:1e", ip: "192.168.39.164"} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:50.788997 685943 main.go:141] libmachine: (multinode-185794) DBG | skip adding static IP to network mk-multinode-185794 - found existing host DHCP lease matching {name: "multinode-185794", mac: "52:54:00:92:f4:1e", ip: "192.168.39.164"}
I0414 14:24:50.789012 685943 main.go:141] libmachine: (multinode-185794) reserved static IP address 192.168.39.164 for domain multinode-185794
I0414 14:24:50.789029 685943 main.go:141] libmachine: (multinode-185794) waiting for SSH...
I0414 14:24:50.789046 685943 main.go:141] libmachine: (multinode-185794) DBG | Getting to WaitForSSH function...
I0414 14:24:50.791630 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.792029 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:50.792073 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.792183 685943 main.go:141] libmachine: (multinode-185794) DBG | Using SSH client type: external
I0414 14:24:50.792208 685943 main.go:141] libmachine: (multinode-185794) DBG | Using SSH private key: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa (-rw-------)
I0414 14:24:50.792249 685943 main.go:141] libmachine: (multinode-185794) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 14:24:50.792261 685943 main.go:141] libmachine: (multinode-185794) DBG | About to run SSH command:
I0414 14:24:50.792315 685943 main.go:141] libmachine: (multinode-185794) DBG | exit 0
I0414 14:24:50.915547 685943 main.go:141] libmachine: (multinode-185794) DBG | SSH cmd err, output: <nil>:
I0414 14:24:50.915936 685943 main.go:141] libmachine: (multinode-185794) Calling .GetConfigRaw
I0414 14:24:50.916601 685943 main.go:141] libmachine: (multinode-185794) Calling .GetIP
I0414 14:24:50.919416 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.919758 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:50.919785 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.920148 685943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20512-652075/.minikube/profiles/multinode-185794/config.json ...
I0414 14:24:50.920391 685943 machine.go:93] provisionDockerMachine start ...
I0414 14:24:50.920414 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:50.920631 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:50.923251 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.923678 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:50.923721 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:50.923849 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:50.924019 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:50.924209 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:50.924339 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:50.924518 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:50.924780 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:50.924795 685943 main.go:141] libmachine: About to run SSH command:
hostname
I0414 14:24:51.031525 685943 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0414 14:24:51.031572 685943 main.go:141] libmachine: (multinode-185794) Calling .GetMachineName
I0414 14:24:51.031908 685943 buildroot.go:166] provisioning hostname "multinode-185794"
I0414 14:24:51.031937 685943 main.go:141] libmachine: (multinode-185794) Calling .GetMachineName
I0414 14:24:51.032164 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.035070 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.035478 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.035519 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.035622 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:51.035831 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.035965 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.036139 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:51.036371 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:51.036577 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:51.036590 685943 main.go:141] libmachine: About to run SSH command:
sudo hostname multinode-185794 && echo "multinode-185794" | sudo tee /etc/hostname
I0414 14:24:51.152485 685943 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-185794
I0414 14:24:51.152514 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.155422 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.155801 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.155840 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.156078 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:51.156291 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.156471 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.156599 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:51.156754 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:51.156973 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:51.156990 685943 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\smultinode-185794' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-185794/g' /etc/hosts;
  else
    echo '127.0.1.1 multinode-185794' | sudo tee -a /etc/hosts;
  fi
fi
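The hosts-file update logged above is idempotent: it only touches /etc/hosts when the node name is missing, and prefers rewriting an existing 127.0.1.1 entry over appending a new one. A standalone sketch of the same logic against a scratch file (the path and seed contents are stand-ins, not the VM's real /etc/hosts):

```shell
# Idempotent hosts update, as in the logged SSH command, but against a
# scratch copy so it is safe to run anywhere.
HOSTS=$(mktemp)
NAME=multinode-185794
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "\s$NAME" "$HOSTS"; then
  if grep -q '^127.0.1.1\s' "$HOSTS"; then
    # An entry exists: rewrite it in place.
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No entry yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT=$(grep '^127.0.1.1' "$HOSTS")
rm -f "$HOSTS"
```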
I0414 14:24:51.267669 685943 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 14:24:51.267706 685943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20512-652075/.minikube CaCertPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20512-652075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20512-652075/.minikube}
I0414 14:24:51.267731 685943 buildroot.go:174] setting up certificates
I0414 14:24:51.267747 685943 provision.go:84] configureAuth start
I0414 14:24:51.267771 685943 main.go:141] libmachine: (multinode-185794) Calling .GetMachineName
I0414 14:24:51.268111 685943 main.go:141] libmachine: (multinode-185794) Calling .GetIP
I0414 14:24:51.271330 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.271745 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.271782 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.271968 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.274700 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.275012 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.275042 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.275160 685943 provision.go:143] copyHostCerts
I0414 14:24:51.275190 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem
I0414 14:24:51.275225 685943 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem, removing ...
I0414 14:24:51.275236 685943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem
I0414 14:24:51.275356 685943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20512-652075/.minikube/ca.pem (1078 bytes)
I0414 14:24:51.275449 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem
I0414 14:24:51.275470 685943 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem, removing ...
I0414 14:24:51.275478 685943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem
I0414 14:24:51.275507 685943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20512-652075/.minikube/cert.pem (1123 bytes)
I0414 14:24:51.275556 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem
I0414 14:24:51.275574 685943 exec_runner.go:144] found /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem, removing ...
I0414 14:24:51.275582 685943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem
I0414 14:24:51.275605 685943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20512-652075/.minikube/key.pem (1675 bytes)
I0414 14:24:51.275658 685943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca-key.pem org=jenkins.multinode-185794 san=[127.0.0.1 192.168.39.164 localhost minikube multinode-185794]
I0414 14:24:51.480600 685943 provision.go:177] copyRemoteCerts
I0414 14:24:51.480682 685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 14:24:51.480712 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.483468 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.483801 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.483828 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.484017 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:51.484211 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.484382 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:51.484518 685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
I0414 14:24:51.564851 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0414 14:24:51.564932 685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0414 14:24:51.587348 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem -> /etc/docker/server.pem
I0414 14:24:51.587446 685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I0414 14:24:51.609482 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0414 14:24:51.609548 685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 14:24:51.631179 685943 provision.go:87] duration metric: took 363.416349ms to configureAuth
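configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the node IP, and the node hostnames. A rough openssl equivalent, assuming OpenSSL 1.1.1+ for `-addext`; unlike the real provisioner it self-signs rather than signing with the minikube CA, and all paths are scratch files:

```shell
# Self-signed stand-in for the provisioner's server.pem, carrying the
# same SAN list the log reports for multinode-185794.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/server-key.pem" -out "$DIR/server.pem" \
  -subj "/O=jenkins.multinode-185794" \
  -addext 'subjectAltName=IP:127.0.0.1,IP:192.168.39.164,DNS:localhost,DNS:minikube,DNS:multinode-185794' \
  2>/dev/null
# Pull the SAN extension back out of the issued certificate.
SAN=$(openssl x509 -in "$DIR/server.pem" -noout -text | grep -A1 'Subject Alternative Name')
rm -rf "$DIR"
```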
I0414 14:24:51.631208 685943 buildroot.go:189] setting minikube options for container-runtime
I0414 14:24:51.631422 685943 config.go:182] Loaded profile config "multinode-185794": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0414 14:24:51.631448 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:51.631739 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.634356 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.634812 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.634846 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.634941 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:51.635152 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.635338 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.635481 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:51.635618 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:51.635833 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:51.635846 685943 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0414 14:24:51.740502 685943 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0414 14:24:51.740532 685943 buildroot.go:70] root file system type: tmpfs
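The `df --output=fstype / | tail -n 1` probe is how the provisioner determines the guest's root filesystem type (tmpfs here marks the buildroot ISO). The same probe works on any Linux host with GNU coreutils, which provides the `--output` flag:

```shell
# `df --output=fstype /` prints a header row plus the value, so
# `tail -n 1` strips the header and keeps only the fstype.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fstype: $FSTYPE"
```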
I0414 14:24:51.740634 685943 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0414 14:24:51.740661 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.743433 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.743804 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.743853 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.744009 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:51.744225 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.744392 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.744539 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:51.744700 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:51.744970 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:51.745066 685943 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0414 14:24:51.860251 685943 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0414 14:24:51.860292 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:51.863152 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.863557 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:51.863592 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:51.863820 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:51.864063 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.864218 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:51.864390 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:51.864574 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:51.864782 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:51.864799 685943 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0414 14:24:53.726736 685943 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
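The unit install above is a compare-and-swap: `diff` short-circuits the `||` when the old and new files already match, so the move and the daemon-reload/restart run only on change (here the old file was absent, hence the diff error and the symlink creation). A file-only sketch of that pattern, with the systemctl calls replaced by a flag so it needs no systemd:

```shell
# diff old new || { mv new old; reload; }  -- install only on change.
# Scratch files stand in for /lib/systemd/system/docker.service[.new].
DIR=$(mktemp -d)
printf '[Unit]\nDescription=old\n' > "$DIR/docker.service"
printf '[Unit]\nDescription=new\n' > "$DIR/docker.service.new"

RELOADED=no
diff -u "$DIR/docker.service" "$DIR/docker.service.new" >/dev/null || {
  mv "$DIR/docker.service.new" "$DIR/docker.service"
  RELOADED=yes   # where the real command runs daemon-reload + restart docker
}
INSTALLED=$(grep '^Description' "$DIR/docker.service")
rm -rf "$DIR"
```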
I0414 14:24:53.726768 685943 machine.go:96] duration metric: took 2.806361695s to provisionDockerMachine
I0414 14:24:53.726780 685943 start.go:293] postStartSetup for "multinode-185794" (driver="kvm2")
I0414 14:24:53.726791 685943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 14:24:53.726817 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:53.727195 685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 14:24:53.727242 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:53.730246 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.730651 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:53.730678 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.730844 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:53.731042 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:53.731227 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:53.731382 685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
I0414 14:24:53.814301 685943 ssh_runner.go:195] Run: cat /etc/os-release
I0414 14:24:53.818382 685943 command_runner.go:130] > NAME=Buildroot
I0414 14:24:53.818411 685943 command_runner.go:130] > VERSION=2023.02.9-dirty
I0414 14:24:53.818418 685943 command_runner.go:130] > ID=buildroot
I0414 14:24:53.818426 685943 command_runner.go:130] > VERSION_ID=2023.02.9
I0414 14:24:53.818434 685943 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
I0414 14:24:53.818509 685943 info.go:137] Remote host: Buildroot 2023.02.9
I0414 14:24:53.818527 685943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-652075/.minikube/addons for local assets ...
I0414 14:24:53.818601 685943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20512-652075/.minikube/files for local assets ...
I0414 14:24:53.818675 685943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/ssl/certs/6592492.pem -> 6592492.pem in /etc/ssl/certs
I0414 14:24:53.818684 685943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/ssl/certs/6592492.pem -> /etc/ssl/certs/6592492.pem
I0414 14:24:53.818765 685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 14:24:53.827986 685943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20512-652075/.minikube/files/etc/ssl/certs/6592492.pem --> /etc/ssl/certs/6592492.pem (1708 bytes)
I0414 14:24:53.850420 685943 start.go:296] duration metric: took 123.619928ms for postStartSetup
I0414 14:24:53.850488 685943 fix.go:56] duration metric: took 21.444598561s for fixHost
I0414 14:24:53.850522 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:53.853457 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.853879 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:53.853918 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.854053 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:53.854288 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:53.854448 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:53.854596 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:53.854751 685943 main.go:141] libmachine: Using SSH client type: native
I0414 14:24:53.854988 685943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.164 22 <nil> <nil>}
I0414 14:24:53.854999 685943 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 14:24:53.960116 685943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744640693.921110595
I0414 14:24:53.960155 685943 fix.go:216] guest clock: 1744640693.921110595
I0414 14:24:53.960166 685943 fix.go:229] Guest: 2025-04-14 14:24:53.921110595 +0000 UTC Remote: 2025-04-14 14:24:53.850494945 +0000 UTC m=+21.591876680 (delta=70.61565ms)
I0414 14:24:53.960223 685943 fix.go:200] guest clock delta is within tolerance: 70.61565ms
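fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the restart when the delta is inside tolerance (about 70ms above). A sketch of the delta computation using two local timestamps, since no guest is available here; awk supplies the floating-point math plain `sh` lacks:

```shell
# Two fractional-second timestamps and their absolute difference.
T1=$(date +%s.%N)
T2=$(date +%s.%N)
DELTA=$(awk -v a="$T1" -v b="$T2" 'BEGIN { d = b - a; if (d < 0) d = -d; print d }')
echo "clock delta: ${DELTA}s"
```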
I0414 14:24:53.960233 685943 start.go:83] releasing machines lock for "multinode-185794", held for 21.554358718s
I0414 14:24:53.960260 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:53.960576 685943 main.go:141] libmachine: (multinode-185794) Calling .GetIP
I0414 14:24:53.963358 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.963796 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:53.963821 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.964011 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:53.964526 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:53.964692 685943 main.go:141] libmachine: (multinode-185794) Calling .DriverName
I0414 14:24:53.964805 685943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 14:24:53.964882 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:53.964889 685943 ssh_runner.go:195] Run: cat /version.json
I0414 14:24:53.964911 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHHostname
I0414 14:24:53.967546 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.967682 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.967905 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:53.967931 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.968028 685943 main.go:141] libmachine: (multinode-185794) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:f4:1e", ip: ""} in network mk-multinode-185794: {Iface:virbr1 ExpiryTime:2025-04-14 15:24:43 +0000 UTC Type:0 Mac:52:54:00:92:f4:1e Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-185794 Clientid:01:52:54:00:92:f4:1e}
I0414 14:24:53.968067 685943 main.go:141] libmachine: (multinode-185794) DBG | domain multinode-185794 has defined IP address 192.168.39.164 and MAC address 52:54:00:92:f4:1e in network mk-multinode-185794
I0414 14:24:53.968104 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:53.968283 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHPort
I0414 14:24:53.968292 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:53.968448 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:53.968451 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHKeyPath
I0414 14:24:53.968599 685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
I0414 14:24:53.968645 685943 main.go:141] libmachine: (multinode-185794) Calling .GetSSHUsername
I0414 14:24:53.968784 685943 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20512-652075/.minikube/machines/multinode-185794/id_rsa Username:docker}
I0414 14:24:54.044486 685943 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
I0414 14:24:54.045290 685943 ssh_runner.go:195] Run: systemctl --version
I0414 14:24:54.067527 685943 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
I0414 14:24:54.067660 685943 command_runner.go:130] > systemd 252 (252)
I0414 14:24:54.067702 685943 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
I0414 14:24:54.067796 685943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 14:24:54.073135 685943 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
W0414 14:24:54.073335 685943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 14:24:54.073414 685943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 14:24:54.088465 685943 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist,
I0414 14:24:54.088511 685943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
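The `find ... -exec mv` step disables competing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. The same pattern against a scratch directory with stand-in conflist names:

```shell
# Rename-to-disable: matching conflists get a .mk_disabled suffix.
DIR=$(mktemp -d)
touch "$DIR/87-podman-bridge.conflist" "$DIR/10-flannel.conflist"
find "$DIR" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
DISABLED=$(ls "$DIR" | grep -c '\.mk_disabled$')
KEPT=$(ls "$DIR" | grep -c '^10-flannel.conflist$')
rm -rf "$DIR"
```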
I0414 14:24:54.088555 685943 start.go:495] detecting cgroup driver to use...
I0414 14:24:54.088703 685943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:24:54.105918 685943 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
I0414 14:24:54.106255 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 14:24:54.116403 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 14:24:54.127493 685943 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 14:24:54.127565 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 14:24:54.137989 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:24:54.148233 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 14:24:54.158712 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 14:24:54.168791 685943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 14:24:54.178838 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 14:24:54.188729 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 14:24:54.198911 685943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
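The sed sequence above rewrites /etc/containerd/config.toml in place over SSH. A minimal sketch of the same substitutions applied to a scratch copy (the sample config content here is an assumption; the real file is larger):

```shell
#!/bin/sh
# Sketch of the config.toml rewrites minikube runs above, against a scratch
# copy instead of /etc/containerd/config.toml. Sample content is assumed.
set -eu
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Pin the pause image and force the cgroupfs driver, as in the log.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$tmp"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
# Re-insert enable_unprivileged_ports directly under the CRI plugin table.
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' "$tmp"
cat "$tmp"
```

Note the GNU sed `-r`/`-i` flags: these match the commands in the log but are not portable to BSD sed.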
I0414 14:24:54.208904 685943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 14:24:54.217745 685943 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:24:54.217831 685943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 14:24:54.217881 685943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 14:24:54.227791 685943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
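The exit-status-255 sysctl probe above is tolerated by design: if the bridge-netfilter key is missing, minikube loads br_netfilter and enables IPv4 forwarding (both writes need root). A read-only sketch of the same check that only reports the current state:

```shell
#!/bin/sh
# Read-only variant of the netfilter probe above; reports state instead of
# modifying the host. On failure, minikube's remedy (requires root) is:
#   sudo modprobe br_netfilter
#   sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
if sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
  state=active
else
  state=missing
fi
echo "bridge-nf-call-iptables: $state"
```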
I0414 14:24:54.236800 685943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:24:54.346893 685943 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 14:24:54.372285 685943 start.go:495] detecting cgroup driver to use...
I0414 14:24:54.372400 685943 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0414 14:24:54.397396 685943 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
I0414 14:24:54.397426 685943 command_runner.go:130] > [Unit]
I0414 14:24:54.397433 685943 command_runner.go:130] > Description=Docker Application Container Engine
I0414 14:24:54.397439 685943 command_runner.go:130] > Documentation=https://docs.docker.com
I0414 14:24:54.397448 685943 command_runner.go:130] > After=network.target minikube-automount.service docker.socket
I0414 14:24:54.397456 685943 command_runner.go:130] > Requires= minikube-automount.service docker.socket
I0414 14:24:54.397463 685943 command_runner.go:130] > StartLimitBurst=3
I0414 14:24:54.397470 685943 command_runner.go:130] > StartLimitIntervalSec=60
I0414 14:24:54.397476 685943 command_runner.go:130] > [Service]
I0414 14:24:54.397482 685943 command_runner.go:130] > Type=notify
I0414 14:24:54.397487 685943 command_runner.go:130] > Restart=on-failure
I0414 14:24:54.397496 685943 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I0414 14:24:54.397510 685943 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I0414 14:24:54.397517 685943 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
I0414 14:24:54.397526 685943 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
I0414 14:24:54.397536 685943 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I0414 14:24:54.397547 685943 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
I0414 14:24:54.397559 685943 command_runner.go:130] > # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I0414 14:24:54.397574 685943 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I0414 14:24:54.397584 685943 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I0414 14:24:54.397594 685943 command_runner.go:130] > ExecStart=
I0414 14:24:54.397608 685943 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
I0414 14:24:54.397617 685943 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
I0414 14:24:54.397625 685943 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I0414 14:24:54.397631 685943 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
I0414 14:24:54.397635 685943 command_runner.go:130] > LimitNOFILE=infinity
I0414 14:24:54.397639 685943 command_runner.go:130] > LimitNPROC=infinity
I0414 14:24:54.397643 685943 command_runner.go:130] > LimitCORE=infinity
I0414 14:24:54.397648 685943 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
I0414 14:24:54.397656 685943 command_runner.go:130] > # Only systemd 226 and above support this version.
I0414 14:24:54.397667 685943 command_runner.go:130] > TasksMax=infinity
I0414 14:24:54.397677 685943 command_runner.go:130] > TimeoutStartSec=0
I0414 14:24:54.397684 685943 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I0414 14:24:54.397688 685943 command_runner.go:130] > Delegate=yes
I0414 14:24:54.397694 685943 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
I0414 14:24:54.397700 685943 command_runner.go:130] > KillMode=process
I0414 14:24:54.397703 685943 command_runner.go:130] > [Install]
I0414 14:24:54.397714 685943 command_runner.go:130] > WantedBy=multi-user.target
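The unit dump above documents the drop-in pattern it uses: an empty `ExecStart=` must precede the real one, or systemd rejects the unit with "more than one ExecStart= setting". A sketch of that pattern written to a scratch directory (the path and dockerd arguments are hypothetical, not the ones from this log):

```shell
#!/bin/sh
# Sketch of the ExecStart-reset drop-in pattern shown above, written to a
# scratch directory rather than /etc/systemd/system (path hypothetical).
set -eu
d=$(mktemp -d)
cat > "$d/override.conf" <<'EOF'
[Service]
# The empty ExecStart= clears the command inherited from the base unit;
# for Type=notify services systemd otherwise refuses to start.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart' "$d/override.conf"
```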
I0414 14:24:54.397782 685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0414 14:24:54.414252 685943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0414 14:24:54.440014 685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0414 14:24:54.453901 685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:24:54.467888 685943 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 14:24:54.494033 685943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 14:24:54.508340 685943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 14:24:54.526606 685943 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
I0414 14:24:54.526881 685943 ssh_runner.go:195] Run: which cri-dockerd
I0414 14:24:54.530695 685943 command_runner.go:130] > /usr/bin/cri-dockerd
I0414 14:24:54.530849 685943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0414 14:24:54.540271 685943 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0414 14:24:54.556266 685943 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0414 14:24:54.666442 685943 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0414 14:24:54.776234 685943 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0414 14:24:54.776400 685943 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
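The 130-byte /etc/docker/daemon.json copied above is not shown in the log; a sketch of a daemon.json that selects the cgroupfs driver, assuming Docker's documented `exec-opts` key (content is an assumption, not the file minikube wrote):

```shell
#!/bin/sh
# Hypothetical daemon.json selecting cgroupfs, written to a scratch file
# instead of /etc/docker/daemon.json. The real file's content is not in
# the log; "exec-opts"/"native.cgroupdriver" is Docker's documented knob.
set -eu
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
cat "$f"
```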
I0414 14:24:54.793573 685943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 14:24:54.907543 685943 ssh_runner.go:195] Run: sudo systemctl restart docker
I0414 14:25:55.981608 685943 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
I0414 14:25:55.981641 685943 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
I0414 14:25:55.982289 685943 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.074682565s)
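After the 61-second restart fails, minikube pulls the docker unit's journal (below). The journal's eventual root cause is a dial timeout on /run/containerd/containerd.sock, so a manual triage of this failure mode might look like the following read-only sketch (systemctl/journalctl may be absent outside the VM, so every probe is allowed to fail):

```shell
#!/bin/sh
# Read-only triage sketch for a failed "systemctl restart docker" like the
# one above. All probes tolerate failure so this runs on non-systemd hosts.
{ command -v systemctl >/dev/null 2>&1 && systemctl is-active docker; } || true
{ command -v journalctl >/dev/null 2>&1 && journalctl --no-pager -n 5 -u docker 2>/dev/null; } || true
# The log's failure is a dial timeout on containerd's socket, so check
# whether that socket exists at all:
if test -S /run/containerd/containerd.sock; then sock=present; else sock=missing; fi
echo "/run/containerd/containerd.sock: $sock"
```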
I0414 14:25:55.982387 685943 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0414 14:25:55.994919 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
I0414 14:25:55.994961 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.197958776Z" level=info msg="Starting up"
I0414 14:25:55.994988 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.198781505Z" level=info msg="containerd not running, starting managed containerd"
I0414 14:25:55.995008 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.199605247Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
I0414 14:25:55.995028 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.226444569Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
I0414 14:25:55.995047 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.245941128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
I0414 14:25:55.995065 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246073498Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
I0414 14:25:55.995079 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246159942Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
I0414 14:25:55.995096 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246206873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995116 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246518954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
I0414 14:25:55.995134 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246640978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995170 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246855158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
I0414 14:25:55.995191 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246902606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995212 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246941808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
I0414 14:25:55.995228 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246977274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995247 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247198205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995267 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247528452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995311 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250227978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
I0414 14:25:55.995332 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250294640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
I0414 14:25:55.995387 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250472406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
I0414 14:25:55.995409 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250517948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
I0414 14:25:55.995426 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250822546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
I0414 14:25:55.995443 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250891126Z" level=info msg="metadata content store policy set" policy=shared
I0414 14:25:55.995460 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252339266Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
I0414 14:25:55.995478 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252452361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
I0414 14:25:55.995496 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252499682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
I0414 14:25:55.995516 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252587729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
I0414 14:25:55.995532 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252633684Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
I0414 14:25:55.995551 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252726102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
I0414 14:25:55.995570 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253034215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
I0414 14:25:55.995588 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253155097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
I0414 14:25:55.995608 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253199587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
I0414 14:25:55.995626 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253243435Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
I0414 14:25:55.995650 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253281902Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995673 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253327396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995696 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253364887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995716 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253462959Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995736 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253609526Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995759 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253650827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995779 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253738201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995832 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253817076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
I0414 14:25:55.995851 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253923991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995869 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254006418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995888 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254044560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995904 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254132419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995923 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254174123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995941 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254257107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995960 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254334894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995979 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254427982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
I0414 14:25:55.995997 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254467066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996014 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254578827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996032 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254669466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996060 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254707212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996078 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254788877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996097 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254876725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
I0414 14:25:55.996115 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254977464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996133 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255064474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996151 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255106276Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
I0414 14:25:55.996171 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255285853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
I0414 14:25:55.996195 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255390332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
I0414 14:25:55.996214 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255474877Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
I0414 14:25:55.996237 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255517504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
I0414 14:25:55.996258 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255607339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
I0414 14:25:55.996276 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255715503Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
I0414 14:25:55.996290 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255802263Z" level=info msg="NRI interface is disabled by configuration."
I0414 14:25:55.996319 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256253750Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
I0414 14:25:55.996335 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256387496Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
I0414 14:25:55.996352 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256524253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
I0414 14:25:55.996369 685943 command_runner.go:130] > Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256620219Z" level=info msg="containerd successfully booted in 0.031733s"
I0414 14:25:55.996393 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.227523866Z" level=info msg="[graphdriver] trying configured driver: overlay2"
I0414 14:25:55.996409 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.284469372Z" level=info msg="Loading containers: start."
I0414 14:25:55.996447 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.495795420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
I0414 14:25:55.996470 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.574685843Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
I0414 14:25:55.996486 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.638462870Z" level=info msg="Loading containers: done."
I0414 14:25:55.996507 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.655995821Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
I0414 14:25:55.996522 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656090385Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
I0414 14:25:55.996544 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656144269Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
I0414 14:25:55.996557 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656591352Z" level=info msg="Daemon has completed initialization"
I0414 14:25:55.996570 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685531817Z" level=info msg="API listen on [::]:2376"
I0414 14:25:55.996584 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685586150Z" level=info msg="API listen on /var/run/docker.sock"
I0414 14:25:55.996598 685943 command_runner.go:130] > Apr 14 14:24:53 multinode-185794 systemd[1]: Started Docker Application Container Engine.
I0414 14:25:55.996615 685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.883173175Z" level=info msg="Processing signal 'terminated'"
I0414 14:25:55.996630 685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.884830278Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
I0414 14:25:55.996648 685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885203639Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
I0414 14:25:55.996664 685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885222714Z" level=info msg="Daemon shutdown complete"
I0414 14:25:55.996690 685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885272739Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
I0414 14:25:55.996749 685943 command_runner.go:130] > Apr 14 14:24:54 multinode-185794 systemd[1]: Stopping Docker Application Container Engine...
I0414 14:25:55.996761 685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 systemd[1]: docker.service: Deactivated successfully.
I0414 14:25:55.996767 685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 systemd[1]: Stopped Docker Application Container Engine.
I0414 14:25:55.996773 685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
I0414 14:25:55.996780 685943 command_runner.go:130] > Apr 14 14:24:55 multinode-185794 dockerd[875]: time="2025-04-14T14:24:55.924289115Z" level=info msg="Starting up"
I0414 14:25:55.996798 685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 dockerd[875]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
I0414 14:25:55.996812 685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
I0414 14:25:55.996822 685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Failed with result 'exit-code'.
I0414 14:25:55.996834 685943 command_runner.go:130] > Apr 14 14:25:55 multinode-185794 systemd[1]: Failed to start Docker Application Container Engine.
I0414 14:25:56.003146 685943 out.go:201]
W0414 14:25:56.004701 685943 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Apr 14 14:24:52 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.197958776Z" level=info msg="Starting up"
Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.198781505Z" level=info msg="containerd not running, starting managed containerd"
Apr 14 14:24:52 multinode-185794 dockerd[491]: time="2025-04-14T14:24:52.199605247Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=498
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.226444569Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.245941128Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246073498Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246159942Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246206873Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246518954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246640978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246855158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246902606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246941808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.246977274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247198205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.247528452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250227978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250294640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250472406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250517948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250822546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.250891126Z" level=info msg="metadata content store policy set" policy=shared
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252339266Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252452361Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252499682Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252587729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252633684Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.252726102Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253034215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253155097Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253199587Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253243435Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253281902Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253327396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253364887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253462959Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253609526Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253650827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253738201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253817076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.253923991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254006418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254044560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254132419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254174123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254257107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254334894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254427982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254467066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254578827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254669466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254707212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254788877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254876725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.254977464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255064474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255106276Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255285853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255390332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255474877Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255517504Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255607339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255715503Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.255802263Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256253750Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256387496Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256524253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 14 14:24:52 multinode-185794 dockerd[498]: time="2025-04-14T14:24:52.256620219Z" level=info msg="containerd successfully booted in 0.031733s"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.227523866Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.284469372Z" level=info msg="Loading containers: start."
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.495795420Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.574685843Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.638462870Z" level=info msg="Loading containers: done."
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.655995821Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656090385Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656144269Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.656591352Z" level=info msg="Daemon has completed initialization"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685531817Z" level=info msg="API listen on [::]:2376"
Apr 14 14:24:53 multinode-185794 dockerd[491]: time="2025-04-14T14:24:53.685586150Z" level=info msg="API listen on /var/run/docker.sock"
Apr 14 14:24:53 multinode-185794 systemd[1]: Started Docker Application Container Engine.
Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.883173175Z" level=info msg="Processing signal 'terminated'"
Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.884830278Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885203639Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885222714Z" level=info msg="Daemon shutdown complete"
Apr 14 14:24:54 multinode-185794 dockerd[491]: time="2025-04-14T14:24:54.885272739Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 14 14:24:54 multinode-185794 systemd[1]: Stopping Docker Application Container Engine...
Apr 14 14:24:55 multinode-185794 systemd[1]: docker.service: Deactivated successfully.
Apr 14 14:24:55 multinode-185794 systemd[1]: Stopped Docker Application Container Engine.
Apr 14 14:24:55 multinode-185794 systemd[1]: Starting Docker Application Container Engine...
Apr 14 14:24:55 multinode-185794 dockerd[875]: time="2025-04-14T14:24:55.924289115Z" level=info msg="Starting up"
Apr 14 14:25:55 multinode-185794 dockerd[875]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 14:25:55 multinode-185794 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 14 14:25:55 multinode-185794 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0414 14:25:56.004765 685943 out.go:270] *
*
W0414 14:25:56.005707 685943 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 14:25:56.007352 685943 out.go:201]
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-185794 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-185794 -n multinode-185794
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-185794 -n multinode-185794: exit status 6 (225.352008ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0414 14:25:56.249285 686345 status.go:458] kubeconfig endpoint: get endpoint: "multinode-185794" does not appear in /home/jenkins/minikube-integration/20512-652075/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-185794" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartMultiNode (84.01s)