=== RUN TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run: out/minikube-darwin-amd64 start -p ha-342000 --wait=true -v=7 --alsologtostderr --driver=hyperkit
E0716 17:43:23.927516 1685 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19264-1062/.minikube/profiles/functional-780000/client.crt: no such file or directory
E0716 17:44:40.072790 1685 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19264-1062/.minikube/profiles/addons-044000/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-342000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m40.416519352s)
-- stdout --
* [ha-342000] minikube v1.33.1 on Darwin 14.5
- MINIKUBE_LOCATION=19264
- KUBECONFIG=/Users/jenkins/minikube-integration/19264-1062/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19264-1062/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting "ha-342000" primary control-plane node in "ha-342000" cluster
* Restarting existing hyperkit VM for "ha-342000" ...
-- /stdout --
** stderr **
I0716 17:43:03.502401 3711 out.go:291] Setting OutFile to fd 1 ...
I0716 17:43:03.502656 3711 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:43:03.502662 3711 out.go:304] Setting ErrFile to fd 2...
I0716 17:43:03.502666 3711 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0716 17:43:03.502860 3711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19264-1062/.minikube/bin
I0716 17:43:03.504332 3711 out.go:298] Setting JSON to false
I0716 17:43:03.527116 3711 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2556,"bootTime":1721174427,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0716 17:43:03.527248 3711 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0716 17:43:03.549196 3711 out.go:177] * [ha-342000] minikube v1.33.1 on Darwin 14.5
I0716 17:43:03.591023 3711 notify.go:220] Checking for updates...
I0716 17:43:03.612946 3711 out.go:177] - MINIKUBE_LOCATION=19264
I0716 17:43:03.654939 3711 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/19264-1062/kubeconfig
I0716 17:43:03.676084 3711 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0716 17:43:03.718938 3711 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0716 17:43:03.760759 3711 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19264-1062/.minikube
I0716 17:43:03.803684 3711 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0716 17:43:03.825762 3711 config.go:182] Loaded profile config "ha-342000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:43:03.826396 3711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0716 17:43:03.826450 3711 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0716 17:43:03.835865 3711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52050
I0716 17:43:03.836305 3711 main.go:141] libmachine: () Calling .GetVersion
I0716 17:43:03.836757 3711 main.go:141] libmachine: Using API Version 1
I0716 17:43:03.836771 3711 main.go:141] libmachine: () Calling .SetConfigRaw
I0716 17:43:03.837009 3711 main.go:141] libmachine: () Calling .GetMachineName
I0716 17:43:03.837123 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:03.837321 3711 driver.go:392] Setting default libvirt URI to qemu:///system
I0716 17:43:03.837598 3711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0716 17:43:03.837619 3711 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0716 17:43:03.846372 3711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52052
I0716 17:43:03.846746 3711 main.go:141] libmachine: () Calling .GetVersion
I0716 17:43:03.847123 3711 main.go:141] libmachine: Using API Version 1
I0716 17:43:03.847136 3711 main.go:141] libmachine: () Calling .SetConfigRaw
I0716 17:43:03.847351 3711 main.go:141] libmachine: () Calling .GetMachineName
I0716 17:43:03.847473 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:03.877086 3711 out.go:177] * Using the hyperkit driver based on existing profile
I0716 17:43:03.919053 3711 start.go:297] selected driver: hyperkit
I0716 17:43:03.919081 3711 start.go:901] validating driver "hyperkit" against &{Name:ha-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-342000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0716 17:43:03.919308 3711 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0716 17:43:03.919511 3711 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0716 17:43:03.919712 3711 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19264-1062/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0716 17:43:03.929208 3711 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
I0716 17:43:03.933386 3711 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0716 17:43:03.933417 3711 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0716 17:43:03.936504 3711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0716 17:43:03.936550 3711 cni.go:84] Creating CNI manager for ""
I0716 17:43:03.936556 3711 cni.go:136] multinode detected (3 nodes found), recommending kindnet
I0716 17:43:03.936640 3711 start.go:340] cluster config:
{Name:ha-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-342000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0716 17:43:03.936745 3711 iso.go:125] acquiring lock: {Name:mk733d144511fa2d8edc27b12f627ff991ad4bd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0716 17:43:03.978816 3711 out.go:177] * Starting "ha-342000" primary control-plane node in "ha-342000" cluster
I0716 17:43:03.999980 3711 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0716 17:43:04.000052 3711 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19264-1062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0716 17:43:04.000102 3711 cache.go:56] Caching tarball of preloaded images
I0716 17:43:04.000398 3711 preload.go:172] Found /Users/jenkins/minikube-integration/19264-1062/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0716 17:43:04.000421 3711 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0716 17:43:04.000597 3711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19264-1062/.minikube/profiles/ha-342000/config.json ...
I0716 17:43:04.001392 3711 start.go:360] acquireMachinesLock for ha-342000: {Name:mkbf6dc07694066e6532011ee825dd1de2f50a27 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0716 17:43:04.001510 3711 start.go:364] duration metric: took 93.714µs to acquireMachinesLock for "ha-342000"
I0716 17:43:04.001544 3711 start.go:96] Skipping create...Using existing machine configuration
I0716 17:43:04.001558 3711 fix.go:54] fixHost starting:
I0716 17:43:04.001932 3711 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0716 17:43:04.001967 3711 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0716 17:43:04.012553 3711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52054
I0716 17:43:04.012950 3711 main.go:141] libmachine: () Calling .GetVersion
I0716 17:43:04.013329 3711 main.go:141] libmachine: Using API Version 1
I0716 17:43:04.013345 3711 main.go:141] libmachine: () Calling .SetConfigRaw
I0716 17:43:04.013612 3711 main.go:141] libmachine: () Calling .GetMachineName
I0716 17:43:04.013735 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:04.013833 3711 main.go:141] libmachine: (ha-342000) Calling .GetState
I0716 17:43:04.013923 3711 main.go:141] libmachine: (ha-342000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0716 17:43:04.014016 3711 main.go:141] libmachine: (ha-342000) DBG | hyperkit pid from json: 3619
I0716 17:43:04.014976 3711 main.go:141] libmachine: (ha-342000) DBG | hyperkit pid 3619 missing from process table
I0716 17:43:04.015009 3711 fix.go:112] recreateIfNeeded on ha-342000: state=Stopped err=<nil>
I0716 17:43:04.015026 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
W0716 17:43:04.015106 3711 fix.go:138] unexpected machine state, will restart: <nil>
I0716 17:43:04.056735 3711 out.go:177] * Restarting existing hyperkit VM for "ha-342000" ...
I0716 17:43:04.078170 3711 main.go:141] libmachine: (ha-342000) Calling .Start
I0716 17:43:04.078427 3711 main.go:141] libmachine: (ha-342000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0716 17:43:04.078466 3711 main.go:141] libmachine: (ha-342000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/hyperkit.pid
I0716 17:43:04.080287 3711 main.go:141] libmachine: (ha-342000) DBG | hyperkit pid 3619 missing from process table
I0716 17:43:04.080301 3711 main.go:141] libmachine: (ha-342000) DBG | pid 3619 is in state "Stopped"
I0716 17:43:04.080317 3711 main.go:141] libmachine: (ha-342000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/hyperkit.pid...
I0716 17:43:04.080611 3711 main.go:141] libmachine: (ha-342000) DBG | Using UUID 36700daa-d8f3-43ed-9fb8-5f8383e6acd9
I0716 17:43:04.202191 3711 main.go:141] libmachine: (ha-342000) DBG | Generated MAC 9a:e4:e3:d8:13:27
I0716 17:43:04.202217 3711 main.go:141] libmachine: (ha-342000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-342000
I0716 17:43:04.202330 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36700daa-d8f3-43ed-9fb8-5f8383e6acd9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acf30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0716 17:43:04.202356 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"36700daa-d8f3-43ed-9fb8-5f8383e6acd9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acf30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0716 17:43:04.202398 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "36700daa-d8f3-43ed-9fb8-5f8383e6acd9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/ha-342000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/tty,log=/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/bzimage,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-342000"}
I0716 17:43:04.202448 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 36700daa-d8f3-43ed-9fb8-5f8383e6acd9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/ha-342000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/tty,log=/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/console-ring -f kexec,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/bzimage,/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-342000"
I0716 17:43:04.202461 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0716 17:43:04.203858 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 DEBUG: hyperkit: Pid is 3726
I0716 17:43:04.204298 3711 main.go:141] libmachine: (ha-342000) DBG | Attempt 0
I0716 17:43:04.204311 3711 main.go:141] libmachine: (ha-342000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0716 17:43:04.204389 3711 main.go:141] libmachine: (ha-342000) DBG | hyperkit pid from json: 3726
I0716 17:43:04.206193 3711 main.go:141] libmachine: (ha-342000) DBG | Searching for 9a:e4:e3:d8:13:27 in /var/db/dhcpd_leases ...
I0716 17:43:04.206260 3711 main.go:141] libmachine: (ha-342000) DBG | Found 7 entries in /var/db/dhcpd_leases!
I0716 17:43:04.206281 3711 main.go:141] libmachine: (ha-342000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:26:72:17:e5:ef ID:1,b6:26:72:17:e5:ef Lease:0x6697137c}
I0716 17:43:04.206295 3711 main.go:141] libmachine: (ha-342000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:de:be:e3:6e:8f:c1 ID:1,de:be:e3:6e:8f:c1 Lease:0x669864e5}
I0716 17:43:04.206308 3711 main.go:141] libmachine: (ha-342000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:6:c0:af:2e:9c:e5 ID:1,6:c0:af:2e:9c:e5 Lease:0x66986480}
I0716 17:43:04.206324 3711 main.go:141] libmachine: (ha-342000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:9a:e4:e3:d8:13:27 ID:1,9a:e4:e3:d8:13:27 Lease:0x6698646c}
I0716 17:43:04.206333 3711 main.go:141] libmachine: (ha-342000) DBG | Found match: 9a:e4:e3:d8:13:27
I0716 17:43:04.206343 3711 main.go:141] libmachine: (ha-342000) DBG | IP: 192.169.0.5
I0716 17:43:04.206363 3711 main.go:141] libmachine: (ha-342000) Calling .GetConfigRaw
I0716 17:43:04.207115 3711 main.go:141] libmachine: (ha-342000) Calling .GetIP
I0716 17:43:04.207283 3711 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19264-1062/.minikube/profiles/ha-342000/config.json ...
I0716 17:43:04.207715 3711 machine.go:94] provisionDockerMachine start ...
I0716 17:43:04.207725 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:04.207867 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:04.207992 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:04.208079 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:04.208193 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:04.208318 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:04.208466 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:04.208697 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:04.208707 3711 main.go:141] libmachine: About to run SSH command:
hostname
I0716 17:43:04.211812 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0716 17:43:04.266428 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0716 17:43:04.267129 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0716 17:43:04.267145 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0716 17:43:04.267168 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0716 17:43:04.267182 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0716 17:43:04.646718 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0716 17:43:04.646733 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0716 17:43:04.761313 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0716 17:43:04.761334 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0716 17:43:04.761344 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0716 17:43:04.761353 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0716 17:43:04.762228 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0716 17:43:04.762238 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0716 17:43:10.024951 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:10 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0716 17:43:10.025031 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:10 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0716 17:43:10.025041 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:10 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0716 17:43:10.049605 3711 main.go:141] libmachine: (ha-342000) DBG | 2024/07/16 17:43:10 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0716 17:43:39.283813 3711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0716 17:43:39.283828 3711 main.go:141] libmachine: (ha-342000) Calling .GetMachineName
I0716 17:43:39.284005 3711 buildroot.go:166] provisioning hostname "ha-342000"
I0716 17:43:39.284016 3711 main.go:141] libmachine: (ha-342000) Calling .GetMachineName
I0716 17:43:39.284108 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.284222 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:39.284351 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.284430 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.284512 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:39.284631 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:39.284778 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:39.284786 3711 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-342000 && echo "ha-342000" | sudo tee /etc/hostname
I0716 17:43:39.361786 3711 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-342000
I0716 17:43:39.361814 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.361947 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:39.362036 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.362130 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.362212 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:39.362333 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:39.362497 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:39.362508 3711 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-342000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-342000/g' /etc/hosts;
else
echo '127.0.1.1 ha-342000' | sudo tee -a /etc/hosts;
fi
fi
I0716 17:43:39.433168 3711 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0716 17:43:39.433193 3711 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19264-1062/.minikube CaCertPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19264-1062/.minikube}
I0716 17:43:39.433214 3711 buildroot.go:174] setting up certificates
I0716 17:43:39.433225 3711 provision.go:84] configureAuth start
I0716 17:43:39.433232 3711 main.go:141] libmachine: (ha-342000) Calling .GetMachineName
I0716 17:43:39.433368 3711 main.go:141] libmachine: (ha-342000) Calling .GetIP
I0716 17:43:39.433489 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.433573 3711 provision.go:143] copyHostCerts
I0716 17:43:39.433609 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19264-1062/.minikube/cert.pem
I0716 17:43:39.433682 3711 exec_runner.go:144] found /Users/jenkins/minikube-integration/19264-1062/.minikube/cert.pem, removing ...
I0716 17:43:39.433690 3711 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19264-1062/.minikube/cert.pem
I0716 17:43:39.433953 3711 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19264-1062/.minikube/cert.pem (1123 bytes)
I0716 17:43:39.434166 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19264-1062/.minikube/key.pem
I0716 17:43:39.434208 3711 exec_runner.go:144] found /Users/jenkins/minikube-integration/19264-1062/.minikube/key.pem, removing ...
I0716 17:43:39.434213 3711 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19264-1062/.minikube/key.pem
I0716 17:43:39.434297 3711 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19264-1062/.minikube/key.pem (1679 bytes)
I0716 17:43:39.434440 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19264-1062/.minikube/ca.pem
I0716 17:43:39.434484 3711 exec_runner.go:144] found /Users/jenkins/minikube-integration/19264-1062/.minikube/ca.pem, removing ...
I0716 17:43:39.434489 3711 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19264-1062/.minikube/ca.pem
I0716 17:43:39.434610 3711 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19264-1062/.minikube/ca.pem (1078 bytes)
I0716 17:43:39.434750 3711 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca-key.pem org=jenkins.ha-342000 san=[127.0.0.1 192.169.0.5 ha-342000 localhost minikube]
I0716 17:43:39.656912 3711 provision.go:177] copyRemoteCerts
I0716 17:43:39.656973 3711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0716 17:43:39.656989 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.657128 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:39.657230 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.657339 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:39.657433 3711 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/id_rsa Username:docker}
I0716 17:43:39.696981 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server.pem -> /etc/docker/server.pem
I0716 17:43:39.697047 3711 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0716 17:43:39.717074 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0716 17:43:39.717138 3711 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19264-1062/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0716 17:43:39.736964 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0716 17:43:39.737035 3711 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19264-1062/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0716 17:43:39.756571 3711 provision.go:87] duration metric: took 323.339299ms to configureAuth
I0716 17:43:39.756584 3711 buildroot.go:189] setting minikube options for container-runtime
I0716 17:43:39.756748 3711 config.go:182] Loaded profile config "ha-342000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0716 17:43:39.756761 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:39.756893 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.756981 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:39.757066 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.757148 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.757240 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:39.757352 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:39.757476 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:39.757484 3711 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0716 17:43:39.823839 3711 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0716 17:43:39.823856 3711 buildroot.go:70] root file system type: tmpfs
I0716 17:43:39.823931 3711 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0716 17:43:39.823943 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.824075 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:39.824164 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.824254 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.824334 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:39.824453 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:39.824603 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:39.824652 3711 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0716 17:43:39.900128 3711 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0716 17:43:39.900150 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:39.900287 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:39.900387 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.900473 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:39.900547 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:39.900665 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:39.900813 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:39.900825 3711 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0716 17:43:41.624192 3711 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0716 17:43:41.624207 3711 machine.go:97] duration metric: took 37.417127708s to provisionDockerMachine
I0716 17:43:41.624215 3711 start.go:293] postStartSetup for "ha-342000" (driver="hyperkit")
I0716 17:43:41.624221 3711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0716 17:43:41.624231 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:41.624436 3711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0716 17:43:41.624454 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:41.624541 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:41.624635 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:41.624714 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:41.624801 3711 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/id_rsa Username:docker}
I0716 17:43:41.666095 3711 ssh_runner.go:195] Run: cat /etc/os-release
I0716 17:43:41.669163 3711 info.go:137] Remote host: Buildroot 2023.02.9
I0716 17:43:41.669175 3711 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19264-1062/.minikube/addons for local assets ...
I0716 17:43:41.669272 3711 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19264-1062/.minikube/files for local assets ...
I0716 17:43:41.669457 3711 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19264-1062/.minikube/files/etc/ssl/certs/16852.pem -> 16852.pem in /etc/ssl/certs
I0716 17:43:41.669464 3711 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19264-1062/.minikube/files/etc/ssl/certs/16852.pem -> /etc/ssl/certs/16852.pem
I0716 17:43:41.669677 3711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0716 17:43:41.677415 3711 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19264-1062/.minikube/files/etc/ssl/certs/16852.pem --> /etc/ssl/certs/16852.pem (1708 bytes)
I0716 17:43:41.696880 3711 start.go:296] duration metric: took 72.658813ms for postStartSetup
I0716 17:43:41.696902 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:41.697080 3711 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0716 17:43:41.697092 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:41.697181 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:41.697267 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:41.697344 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:41.697435 3711 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/id_rsa Username:docker}
I0716 17:43:41.737806 3711 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0716 17:43:41.737863 3711 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0716 17:43:41.791627 3711 fix.go:56] duration metric: took 37.790720665s for fixHost
I0716 17:43:41.791648 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:41.791785 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:41.791906 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:41.791990 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:41.792087 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:41.792234 3711 main.go:141] libmachine: Using SSH client type: native
I0716 17:43:41.792379 3711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x110ae060] 0x110b0dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0716 17:43:41.792386 3711 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0716 17:43:41.858392 3711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721177021.961808659
I0716 17:43:41.858403 3711 fix.go:216] guest clock: 1721177021.961808659
I0716 17:43:41.858408 3711 fix.go:229] Guest: 2024-07-16 17:43:41.961808659 -0700 PDT Remote: 2024-07-16 17:43:41.791638 -0700 PDT m=+38.326106877 (delta=170.170659ms)
I0716 17:43:41.858423 3711 fix.go:200] guest clock delta is within tolerance: 170.170659ms
I0716 17:43:41.858427 3711 start.go:83] releasing machines lock for "ha-342000", held for 37.857556141s
I0716 17:43:41.858454 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:41.858591 3711 main.go:141] libmachine: (ha-342000) Calling .GetIP
I0716 17:43:41.858721 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:41.859062 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:41.859180 3711 main.go:141] libmachine: (ha-342000) Calling .DriverName
I0716 17:43:41.859250 3711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0716 17:43:41.859285 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:41.859324 3711 ssh_runner.go:195] Run: cat /version.json
I0716 17:43:41.859335 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHHostname
I0716 17:43:41.859380 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:41.859442 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHPort
I0716 17:43:41.859471 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:41.859554 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHKeyPath
I0716 17:43:41.859579 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:41.859673 3711 main.go:141] libmachine: (ha-342000) Calling .GetSSHUsername
I0716 17:43:41.859688 3711 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/id_rsa Username:docker}
I0716 17:43:41.859758 3711 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19264-1062/.minikube/machines/ha-342000/id_rsa Username:docker}
I0716 17:43:41.899521 3711 ssh_runner.go:195] Run: systemctl --version
I0716 17:43:41.947020 3711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0716 17:43:41.952040 3711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0716 17:43:41.952077 3711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0716 17:43:41.965502 3711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0716 17:43:41.965513 3711 start.go:495] detecting cgroup driver to use...
I0716 17:43:41.965610 3711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0716 17:43:41.983194 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0716 17:43:41.992159 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0716 17:43:42.002505 3711 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0716 17:43:42.002552 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0716 17:43:42.011606 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0716 17:43:42.021149 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0716 17:43:42.029283 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0716 17:43:42.037485 3711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0716 17:43:42.045716 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0716 17:43:42.053825 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0716 17:43:42.062023 3711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0716 17:43:42.070318 3711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0716 17:43:42.077626 3711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0716 17:43:42.085151 3711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0716 17:43:42.176553 3711 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0716 17:43:42.195628 3711 start.go:495] detecting cgroup driver to use...
I0716 17:43:42.195701 3711 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0716 17:43:42.216968 3711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0716 17:43:42.231624 3711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0716 17:43:42.250629 3711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0716 17:43:42.261703 3711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0716 17:43:42.272364 3711 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0716 17:43:42.293429 3711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0716 17:43:42.303667 3711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0716 17:43:42.319015 3711 ssh_runner.go:195] Run: which cri-dockerd
I0716 17:43:42.321929 3711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0716 17:43:42.329101 3711 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0716 17:43:42.342434 3711 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0716 17:43:42.436296 3711 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0716 17:43:42.545617 3711 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0716 17:43:42.545681 3711 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0716 17:43:42.559865 3711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0716 17:43:42.654915 3711 ssh_runner.go:195] Run: sudo systemctl restart docker
I0716 17:44:43.699084 3711 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.045203405s)
I0716 17:44:43.699143 3711 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0716 17:44:43.734666 3711 out.go:177]
W0716 17:44:43.755675 3711 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jul 17 00:43:39 ha-342000 systemd[1]: Starting Docker Application Container Engine...
Jul 17 00:43:39 ha-342000 dockerd[510]: time="2024-07-17T00:43:39.558200075Z" level=info msg="Starting up"
Jul 17 00:43:39 ha-342000 dockerd[510]: time="2024-07-17T00:43:39.558842124Z" level=info msg="containerd not running, starting managed containerd"
Jul 17 00:43:39 ha-342000 dockerd[510]: time="2024-07-17T00:43:39.559438066Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.575924797Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591084520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591184857Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591251886Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591287480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591403916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591459500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591587704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591627844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591658585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591686574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591784138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.591940724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.593474996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.593523918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.593643884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.593684748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.593780568Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.593828177Z" level=info msg="metadata content store policy set" policy=shared
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595349221Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595406884Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595442652Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595501303Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595543842Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595614113Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595817716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595895374Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595930627Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.595960635Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596023274Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596064317Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596096610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596136248Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596171216Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596200876Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596230211Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596258894Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596300576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596332605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596362011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596391853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596422898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596454723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596486297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596520963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596550426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596580975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596609563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596638202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596666393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596696889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596736953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596770947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596802794Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596874243Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596918750Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.596969484Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597006564Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597036510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597065037Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597170749Z" level=info msg="NRI interface is disabled by configuration."
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597329691Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597389608Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597441245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 17 00:43:39 ha-342000 dockerd[516]: time="2024-07-17T00:43:39.597480527Z" level=info msg="containerd successfully booted in 0.022562s"
Jul 17 00:43:40 ha-342000 dockerd[510]: time="2024-07-17T00:43:40.581453174Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 17 00:43:40 ha-342000 dockerd[510]: time="2024-07-17T00:43:40.627328440Z" level=info msg="Loading containers: start."
Jul 17 00:43:40 ha-342000 dockerd[510]: time="2024-07-17T00:43:40.828875041Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 17 00:43:40 ha-342000 dockerd[510]: time="2024-07-17T00:43:40.891186630Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 17 00:43:41 ha-342000 dockerd[510]: time="2024-07-17T00:43:41.693110526Z" level=warning msg="error locating sandbox id 48b8604de0b17187862a7c007a07efef9bbde0f6a177993a7eb33a82fb6a8c37: sandbox 48b8604de0b17187862a7c007a07efef9bbde0f6a177993a7eb33a82fb6a8c37 not found"
Jul 17 00:43:41 ha-342000 dockerd[510]: time="2024-07-17T00:43:41.693414832Z" level=info msg="Loading containers: done."
Jul 17 00:43:41 ha-342000 dockerd[510]: time="2024-07-17T00:43:41.704030068Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
Jul 17 00:43:41 ha-342000 dockerd[510]: time="2024-07-17T00:43:41.704193142Z" level=info msg="Daemon has completed initialization"
Jul 17 00:43:41 ha-342000 dockerd[510]: time="2024-07-17T00:43:41.726147971Z" level=info msg="API listen on /var/run/docker.sock"
Jul 17 00:43:41 ha-342000 dockerd[510]: time="2024-07-17T00:43:41.726241436Z" level=info msg="API listen on [::]:2376"
Jul 17 00:43:41 ha-342000 systemd[1]: Started Docker Application Container Engine.
Jul 17 00:43:42 ha-342000 dockerd[510]: time="2024-07-17T00:43:42.770781644Z" level=info msg="Processing signal 'terminated'"
Jul 17 00:43:42 ha-342000 dockerd[510]: time="2024-07-17T00:43:42.771826249Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 17 00:43:42 ha-342000 dockerd[510]: time="2024-07-17T00:43:42.772177484Z" level=info msg="Daemon shutdown complete"
Jul 17 00:43:42 ha-342000 dockerd[510]: time="2024-07-17T00:43:42.772220185Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 17 00:43:42 ha-342000 dockerd[510]: time="2024-07-17T00:43:42.772220368Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 17 00:43:42 ha-342000 systemd[1]: Stopping Docker Application Container Engine...
Jul 17 00:43:43 ha-342000 systemd[1]: docker.service: Deactivated successfully.
Jul 17 00:43:43 ha-342000 systemd[1]: Stopped Docker Application Container Engine.
Jul 17 00:43:43 ha-342000 systemd[1]: Starting Docker Application Container Engine...
Jul 17 00:43:43 ha-342000 dockerd[1116]: time="2024-07-17T00:43:43.810331520Z" level=info msg="Starting up"
Jul 17 00:44:43 ha-342000 dockerd[1116]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 17 00:44:43 ha-342000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 17 00:44:43 ha-342000 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 17 00:44:43 ha-342000 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0716 17:44:43.755788 3711 out.go:239] *
W0716 17:44:43.758123 3711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0716 17:44:43.819403 3711 out.go:177]
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-342000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-342000 -n ha-342000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-342000 -n ha-342000: exit status 6 (150.309583ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0716 17:44:44.012429 3742 status.go:417] kubeconfig endpoint: get endpoint: "ha-342000" does not appear in /Users/jenkins/minikube-integration/19264-1062/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-342000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (100.58s)