=== RUN TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run: out/minikube-darwin-amd64 node list -p ha-572000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run: out/minikube-darwin-amd64 stop -p ha-572000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-572000 -v=7 --alsologtostderr: (27.058696736s)
ha_test.go:467: (dbg) Run: out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr
E0717 10:33:50.055435 1639 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/functional-325000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-572000 --wait=true -v=7 --alsologtostderr: exit status 90 (1m34.376859448s)
-- stdout --
* [ha-572000] minikube v1.33.1 on Darwin 14.5
- MINIKUBE_LOCATION=19283
- KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting "ha-572000" primary control-plane node in "ha-572000" cluster
* Restarting existing hyperkit VM for "ha-572000" ...
* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
* Enabled addons:
* Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
* Restarting existing hyperkit VM for "ha-572000-m02" ...
* Found network options:
- NO_PROXY=192.169.0.5
-- /stdout --
** stderr **
I0717 10:32:37.218202 3508 out.go:291] Setting OutFile to fd 1 ...
I0717 10:32:37.218482 3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:32:37.218488 3508 out.go:304] Setting ErrFile to fd 2...
I0717 10:32:37.218492 3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:32:37.218678 3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:32:37.220111 3508 out.go:298] Setting JSON to false
I0717 10:32:37.243881 3508 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1928,"bootTime":1721235629,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0717 10:32:37.243971 3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0717 10:32:37.265852 3508 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
I0717 10:32:37.307717 3508 out.go:177] - MINIKUBE_LOCATION=19283
I0717 10:32:37.307783 3508 notify.go:220] Checking for updates...
I0717 10:32:37.352082 3508 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:37.394723 3508 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0717 10:32:37.416561 3508 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 10:32:37.437566 3508 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
I0717 10:32:37.458758 3508 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 10:32:37.480259 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:37.480391 3508 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 10:32:37.481074 3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:37.481147 3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:32:37.491120 3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
I0717 10:32:37.491492 3508 main.go:141] libmachine: () Calling .GetVersion
I0717 10:32:37.491919 3508 main.go:141] libmachine: Using API Version 1
I0717 10:32:37.491928 3508 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:32:37.492189 3508 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:32:37.492307 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:37.520549 3508 out.go:177] * Using the hyperkit driver based on existing profile
I0717 10:32:37.563535 3508 start.go:297] selected driver: hyperkit
I0717 10:32:37.563555 3508 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:32:37.563770 3508 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 10:32:37.563903 3508 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:32:37.564063 3508 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0717 10:32:37.572774 3508 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
I0717 10:32:37.578697 3508 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:37.578722 3508 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0717 10:32:37.582004 3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 10:32:37.582058 3508 cni.go:84] Creating CNI manager for ""
I0717 10:32:37.582066 3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0717 10:32:37.582150 3508 start.go:340] cluster config:
{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:32:37.582277 3508 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:32:37.624644 3508 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
I0717 10:32:37.645662 3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:32:37.645750 3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0717 10:32:37.645778 3508 cache.go:56] Caching tarball of preloaded images
I0717 10:32:37.645983 3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 10:32:37.646002 3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:32:37.646175 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:37.647084 3508 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:32:37.647209 3508 start.go:364] duration metric: took 99.885µs to acquireMachinesLock for "ha-572000"
I0717 10:32:37.647240 3508 start.go:96] Skipping create...Using existing machine configuration
I0717 10:32:37.647261 3508 fix.go:54] fixHost starting:
I0717 10:32:37.647673 3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:37.647700 3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:32:37.656651 3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
I0717 10:32:37.657021 3508 main.go:141] libmachine: () Calling .GetVersion
I0717 10:32:37.657336 3508 main.go:141] libmachine: Using API Version 1
I0717 10:32:37.657346 3508 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:32:37.657590 3508 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:32:37.657719 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:37.657832 3508 main.go:141] libmachine: (ha-572000) Calling .GetState
I0717 10:32:37.657936 3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:37.658021 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
I0717 10:32:37.658989 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
I0717 10:32:37.658986 3508 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
I0717 10:32:37.659004 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
W0717 10:32:37.659109 3508 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:32:37.701727 3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
I0717 10:32:37.722485 3508 main.go:141] libmachine: (ha-572000) Calling .Start
I0717 10:32:37.722730 3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:37.722799 3508 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
I0717 10:32:37.724830 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
I0717 10:32:37.724872 3508 main.go:141] libmachine: (ha-572000) DBG | pid 2926 is in state "Stopped"
I0717 10:32:37.724889 3508 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
I0717 10:32:37.725226 3508 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
I0717 10:32:37.837447 3508 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
I0717 10:32:37.837476 3508 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
I0717 10:32:37.837593 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:37.837631 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:37.837679 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
I0717 10:32:37.837720 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
I0717 10:32:37.837736 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0717 10:32:37.839166 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Pid is 3521
I0717 10:32:37.839653 3508 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
I0717 10:32:37.839674 3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:37.839714 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
I0717 10:32:37.841412 3508 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
I0717 10:32:37.841498 3508 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
I0717 10:32:37.841515 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
I0717 10:32:37.841527 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
I0717 10:32:37.841536 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
I0717 10:32:37.841559 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x66994ff6}
I0717 10:32:37.841570 3508 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
I0717 10:32:37.841595 3508 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
I0717 10:32:37.841705 3508 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
I0717 10:32:37.842357 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:37.842580 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:37.843052 3508 machine.go:94] provisionDockerMachine start ...
I0717 10:32:37.843065 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:37.843201 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:37.843303 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:37.843420 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:37.843572 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:37.843663 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:37.843791 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:37.844002 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:37.844014 3508 main.go:141] libmachine: About to run SSH command:
hostname
I0717 10:32:37.847060 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0717 10:32:37.898878 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0717 10:32:37.899633 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:37.899658 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:37.899668 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:37.899678 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:38.277909 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0717 10:32:38.277922 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0717 10:32:38.392613 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:38.392633 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:38.392644 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:38.392676 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:38.393519 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0717 10:32:38.393530 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0717 10:32:43.648108 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0717 10:32:43.648154 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0717 10:32:43.648161 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0717 10:32:43.672680 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0717 10:32:48.904402 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0717 10:32:48.904418 3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
I0717 10:32:48.904582 3508 buildroot.go:166] provisioning hostname "ha-572000"
I0717 10:32:48.904593 3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
I0717 10:32:48.904692 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:48.904776 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:48.904887 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.904976 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.905073 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:48.905225 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:48.905383 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:48.905392 3508 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
I0717 10:32:48.967564 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
I0717 10:32:48.967584 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:48.967740 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:48.967836 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.967934 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.968014 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:48.968132 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:48.968282 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:48.968293 3508 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-572000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
else
echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts;
fi
fi
I0717 10:32:49.026313 3508 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 10:32:49.026336 3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
I0717 10:32:49.026353 3508 buildroot.go:174] setting up certificates
I0717 10:32:49.026367 3508 provision.go:84] configureAuth start
I0717 10:32:49.026375 3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
I0717 10:32:49.026507 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:49.026613 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.026706 3508 provision.go:143] copyHostCerts
I0717 10:32:49.026741 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:32:49.026811 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
I0717 10:32:49.026819 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:32:49.026972 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
I0717 10:32:49.027200 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:32:49.027231 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
I0717 10:32:49.027236 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:32:49.027325 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
I0717 10:32:49.027487 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:32:49.027519 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
I0717 10:32:49.027524 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:32:49.027590 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
I0717 10:32:49.027748 3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
I0717 10:32:49.085766 3508 provision.go:177] copyRemoteCerts
I0717 10:32:49.085812 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 10:32:49.085827 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.086112 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.086217 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.086305 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.086395 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:49.120573 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 10:32:49.120648 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 10:32:49.139510 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 10:32:49.139585 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0717 10:32:49.158247 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 10:32:49.158317 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 10:32:49.177520 3508 provision.go:87] duration metric: took 151.137832ms to configureAuth
I0717 10:32:49.177532 3508 buildroot.go:189] setting minikube options for container-runtime
I0717 10:32:49.177693 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:49.177706 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:49.177837 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.177945 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.178031 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.178106 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.178195 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.178315 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:49.178439 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:49.178454 3508 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0717 10:32:49.231928 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0717 10:32:49.231939 3508 buildroot.go:70] root file system type: tmpfs
I0717 10:32:49.232011 3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0717 10:32:49.232025 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.232158 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.232247 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.232341 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.232427 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.232563 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:49.232710 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:49.232755 3508 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0717 10:32:49.295280 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0717 10:32:49.295308 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.295446 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.295550 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.295637 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.295723 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.295852 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:49.295991 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:49.296003 3508 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0717 10:32:50.972633 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0717 10:32:50.972648 3508 machine.go:97] duration metric: took 13.129388483s to provisionDockerMachine
I0717 10:32:50.972660 3508 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
I0717 10:32:50.972668 3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 10:32:50.972678 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:50.972893 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 10:32:50.972908 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:50.973007 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:50.973108 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:50.973193 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:50.973281 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.011765 3508 ssh_runner.go:195] Run: cat /etc/os-release
I0717 10:32:51.016752 3508 info.go:137] Remote host: Buildroot 2023.02.9
I0717 10:32:51.016768 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
I0717 10:32:51.016865 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
I0717 10:32:51.017004 3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
I0717 10:32:51.017011 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
I0717 10:32:51.017179 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 10:32:51.027779 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
I0717 10:32:51.057568 3508 start.go:296] duration metric: took 84.89741ms for postStartSetup
I0717 10:32:51.057590 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.057768 3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0717 10:32:51.057780 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.057871 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.057953 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.058038 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.058120 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.090670 3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0717 10:32:51.090728 3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0717 10:32:51.124190 3508 fix.go:56] duration metric: took 13.476731728s for fixHost
I0717 10:32:51.124211 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.124344 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.124460 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.124556 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.124646 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.124769 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:51.124925 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:51.124933 3508 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0717 10:32:51.178019 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237571.303332168
I0717 10:32:51.178031 3508 fix.go:216] guest clock: 1721237571.303332168
I0717 10:32:51.178046 3508 fix.go:229] Guest: 2024-07-17 10:32:51.303332168 -0700 PDT Remote: 2024-07-17 10:32:51.124202 -0700 PDT m=+13.941974821 (delta=179.130168ms)
I0717 10:32:51.178065 3508 fix.go:200] guest clock delta is within tolerance: 179.130168ms
I0717 10:32:51.178069 3508 start.go:83] releasing machines lock for "ha-572000", held for 13.530645229s
I0717 10:32:51.178090 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178220 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:51.178321 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178658 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178764 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178848 3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 10:32:51.178881 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.178898 3508 ssh_runner.go:195] Run: cat /version.json
I0717 10:32:51.178911 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.178978 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.179001 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.179061 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.179087 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.179158 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.179178 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.179272 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.179286 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.214891 3508 ssh_runner.go:195] Run: systemctl --version
I0717 10:32:51.259994 3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0717 10:32:51.264962 3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 10:32:51.265002 3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 10:32:51.277704 3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 10:32:51.277717 3508 start.go:495] detecting cgroup driver to use...
I0717 10:32:51.277809 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:32:51.295436 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 10:32:51.304332 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 10:32:51.313061 3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 10:32:51.313115 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 10:32:51.321793 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:32:51.330506 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 10:32:51.339262 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:32:51.347997 3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 10:32:51.356934 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 10:32:51.365798 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 10:32:51.374520 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 10:32:51.383330 3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 10:32:51.391096 3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 10:32:51.398988 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:51.492043 3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 10:32:51.510670 3508 start.go:495] detecting cgroup driver to use...
I0717 10:32:51.510748 3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0717 10:32:51.522109 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:32:51.533578 3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0717 10:32:51.547583 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:32:51.558324 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:32:51.568495 3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 10:32:51.586295 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:32:51.596174 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:32:51.611388 3508 ssh_runner.go:195] Run: which cri-dockerd
I0717 10:32:51.614154 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0717 10:32:51.621515 3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0717 10:32:51.636315 3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0717 10:32:51.730805 3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0717 10:32:51.833325 3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0717 10:32:51.833396 3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0717 10:32:51.849329 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:51.950120 3508 ssh_runner.go:195] Run: sudo systemctl restart docker
I0717 10:32:54.304256 3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.354082061s)
I0717 10:32:54.304312 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0717 10:32:54.314507 3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0717 10:32:54.327160 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0717 10:32:54.337277 3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0717 10:32:54.428967 3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0717 10:32:54.528124 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:54.629785 3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0717 10:32:54.644492 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0717 10:32:54.655322 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:54.750191 3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0717 10:32:54.814687 3508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0717 10:32:54.814779 3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0717 10:32:54.819517 3508 start.go:563] Will wait 60s for crictl version
I0717 10:32:54.819571 3508 ssh_runner.go:195] Run: which crictl
I0717 10:32:54.823230 3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 10:32:54.848640 3508 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.0.3
RuntimeApiVersion: v1
I0717 10:32:54.848713 3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0717 10:32:54.866198 3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0717 10:32:54.925410 3508 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
I0717 10:32:54.925479 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:54.925865 3508 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0717 10:32:54.930367 3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 10:32:54.939983 3508 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 10:32:54.940088 3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:32:54.940151 3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0717 10:32:54.953243 3508 docker.go:685] Got preloaded images: -- stdout --
kindest/kindnetd:v20240715-585640e9
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0717 10:32:54.953256 3508 docker.go:615] Images already preloaded, skipping extraction
I0717 10:32:54.953343 3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0717 10:32:54.966247 3508 docker.go:685] Got preloaded images: -- stdout --
kindest/kindnetd:v20240715-585640e9
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0717 10:32:54.966267 3508 cache_images.go:84] Images are preloaded, skipping loading
I0717 10:32:54.966280 3508 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
I0717 10:32:54.966352 3508 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 10:32:54.966420 3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0717 10:32:54.987201 3508 cni.go:84] Creating CNI manager for ""
I0717 10:32:54.987214 3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0717 10:32:54.987234 3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 10:32:54.987251 3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0717 10:32:54.987337 3508 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.169.0.5
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-572000"
kubeletExtraArgs:
node-ip: 192.169.0.5
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
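The kubeadm/kubelet/kube-proxy YAML above is rendered from the options struct logged at kubeadm.go:181. A rough sketch of that render step with text/template, using only a few of the values visible in the log (node IP, port, node name, pod and service subnets; everything else is omitted):

package main

import (
	"os"
	"text/template"
)

// Params holds the handful of fields used by this trimmed-down template.
type Params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the config dump in the log above.
	p := Params{
		AdvertiseAddress: "192.169.0.5",
		BindPort:         8443,
		NodeName:         "ha-572000",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}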
I0717 10:32:54.987354 3508 kube-vip.go:115] generating kube-vip config ...
I0717 10:32:54.987400 3508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0717 10:32:54.999700 3508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0717 10:32:54.999787 3508 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.169.0.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
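With cp_enable, lb_enable and the address env set, kube-vip advertises 192.169.0.254 and load-balances port 8443 across the control-plane nodes, so the API stays reachable on the VIP while individual nodes restart. A quick reachability probe for that VIP (illustrative only; it skips TLS verification rather than wiring up the minikube CA, and an unauthenticated status code still proves the endpoint answers):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// VIP and port come from the kube-vip config above.
	url := "https://192.169.0.254:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: trust nothing rather than loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, "VIP not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("apiserver VIP answered with", resp.Status)
}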
I0717 10:32:54.999838 3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 10:32:55.007455 3508 binaries.go:44] Found k8s binaries, skipping transfer
I0717 10:32:55.007500 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0717 10:32:55.014894 3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
I0717 10:32:55.028112 3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 10:32:55.043389 3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
I0717 10:32:55.057830 3508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
I0717 10:32:55.071316 3508 ssh_runner.go:195] Run: grep 192.169.0.254 control-plane.minikube.internal$ /etc/hosts
I0717 10:32:55.074184 3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 10:32:55.083466 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:55.183439 3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 10:32:55.197167 3508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
I0717 10:32:55.197180 3508 certs.go:194] generating shared ca certs ...
I0717 10:32:55.197190 3508 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.197338 3508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
I0717 10:32:55.197396 3508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
I0717 10:32:55.197406 3508 certs.go:256] generating profile certs ...
I0717 10:32:55.197495 3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
I0717 10:32:55.197518 3508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
I0717 10:32:55.197535 3508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
I0717 10:32:55.361955 3508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 ...
I0717 10:32:55.361972 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7: {Name:mk29664a7594975eea689d2f8ed48fdc71e62969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.362392 3508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 ...
I0717 10:32:55.362403 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7: {Name:mk57740b7d279f3d01c1e4241799a0ef5b1e79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.362628 3508 certs.go:381] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt
I0717 10:32:55.362825 3508 certs.go:385] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key
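The apiserver cert regenerated above carries IP SANs for the service IP, localhost, every control-plane node and the VIP, which is what lets clients validate the same certificate no matter which endpoint they hit. A sketch of building a certificate with that SAN list using crypto/x509 (self-signed for brevity; minikube actually signs it with its minikubeCA, and the key size and validity here are arbitrary choices):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the IP list in the log above.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"),
		net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"),
		net.ParseIP("192.169.0.5"),
		net.ParseIP("192.169.0.6"),
		net.ParseIP("192.169.0.7"),
		net.ParseIP("192.169.0.254"),
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed (template doubles as parent); a real setup signs with the CA cert/key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}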
I0717 10:32:55.363038 3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
I0717 10:32:55.363048 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0717 10:32:55.363071 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0717 10:32:55.363089 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0717 10:32:55.363110 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0717 10:32:55.363127 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0717 10:32:55.363144 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0717 10:32:55.363163 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0717 10:32:55.363191 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0717 10:32:55.363269 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
W0717 10:32:55.363307 3508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
I0717 10:32:55.363315 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
I0717 10:32:55.363344 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
I0717 10:32:55.363373 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
I0717 10:32:55.363400 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
I0717 10:32:55.363474 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
I0717 10:32:55.363509 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.363530 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
I0717 10:32:55.363548 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
I0717 10:32:55.363978 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 10:32:55.392580 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0717 10:32:55.424360 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 10:32:55.448923 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 10:32:55.478217 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0717 10:32:55.513430 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0717 10:32:55.570074 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 10:32:55.603052 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 10:32:55.623021 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 10:32:55.641658 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
I0717 10:32:55.661447 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
I0717 10:32:55.681020 3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 10:32:55.694280 3508 ssh_runner.go:195] Run: openssl version
I0717 10:32:55.698669 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 10:32:55.707011 3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.710297 3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.710338 3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.714541 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 10:32:55.722665 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
I0717 10:32:55.730951 3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
I0717 10:32:55.734212 3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
I0717 10:32:55.734256 3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
I0717 10:32:55.738428 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
I0717 10:32:55.746621 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
I0717 10:32:55.754849 3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
I0717 10:32:55.758298 3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
I0717 10:32:55.758341 3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
I0717 10:32:55.762565 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
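The three `openssl x509 -hash` runs followed by `ln -fs` above do what update-ca-certificates would: give each CA file a `<subject-hash>.0` symlink under /etc/ssl/certs so OpenSSL's lookup-by-hash finds it. The same step sketched in Go, shelling out to openssl for the hash (paths below are placeholders, not the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates <certsDir>/<hash>.0 pointing at certPath,
// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ignore error: the link may simply not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder arguments; the log links certs from /usr/share/ca-certificates into /etc/ssl/certs.
	if err := linkBySubjectHash("minikubeCA.pem", "."); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}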
I0717 10:32:55.770829 3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 10:32:55.774715 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0717 10:32:55.780174 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0717 10:32:55.784640 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0717 10:32:55.789061 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0717 10:32:55.793372 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0717 10:32:55.797672 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
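Each `openssl x509 -checkend 86400` call above asks whether the certificate is still valid 24 hours from now; a non-zero exit is what prompts minikube to regenerate the cert instead of reusing it. The equivalent check in Go's crypto/x509 (the file path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. the condition "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks the apiserver, etcd and front-proxy certs.
	expiring, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}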
I0717 10:32:55.802149 3508 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:32:55.802263 3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0717 10:32:55.813831 3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 10:32:55.821229 3508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0717 10:32:55.821245 3508 kubeadm.go:593] restartPrimaryControlPlane start ...
I0717 10:32:55.821296 3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0717 10:32:55.828842 3508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0717 10:32:55.829172 3508 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:55.829253 3508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
I0717 10:32:55.829432 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.829834 3508 loader.go:395] Config loaded from file: /Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:55.830028 3508 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x71e8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0717 10:32:55.830325 3508 cert_rotation.go:137] Starting client certificate rotation controller
I0717 10:32:55.830504 3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0717 10:32:55.837614 3508 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
I0717 10:32:55.837631 3508 kubeadm.go:597] duration metric: took 16.382346ms to restartPrimaryControlPlane
I0717 10:32:55.837636 3508 kubeadm.go:394] duration metric: took 35.493194ms to StartCluster
I0717 10:32:55.837647 3508 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.837726 3508 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:55.838160 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
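Because the profile was stopped, the kubeconfig no longer carries a "ha-572000" cluster or context, so minikube repairs the file before continuing (the "needs updating (will repair)" lines above). A rough equivalent using client-go's clientcmd package, assuming client-go is available; the server URL follows the client config dumped above, while the cert paths are placeholders:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := os.Getenv("KUBECONFIG") // e.g. the minikube-integration kubeconfig in the log

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}

	// Re-add the cluster, user and context entries for the profile.
	cfg.Clusters["ha-572000"] = &api.Cluster{
		Server:               "https://192.169.0.5:8443",
		CertificateAuthority: "/path/to/.minikube/ca.crt", // placeholder
	}
	cfg.AuthInfos["ha-572000"] = &api.AuthInfo{
		ClientCertificate: "/path/to/profiles/ha-572000/client.crt", // placeholder
		ClientKey:         "/path/to/profiles/ha-572000/client.key", // placeholder
	}
	cfg.Contexts["ha-572000"] = &api.Context{Cluster: "ha-572000", AuthInfo: "ha-572000"}
	cfg.CurrentContext = "ha-572000"

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}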
I0717 10:32:55.838398 3508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0717 10:32:55.838411 3508 start.go:241] waiting for startup goroutines ...
I0717 10:32:55.838425 3508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0717 10:32:55.838529 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:55.881476 3508 out.go:177] * Enabled addons:
I0717 10:32:55.902556 3508 addons.go:510] duration metric: took 64.135812ms for enable addons: enabled=[]
I0717 10:32:55.902605 3508 start.go:246] waiting for cluster config update ...
I0717 10:32:55.902617 3508 start.go:255] writing updated cluster config ...
I0717 10:32:55.924553 3508 out.go:177]
I0717 10:32:55.945720 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:55.945818 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:55.967938 3508 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
I0717 10:32:56.010383 3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:32:56.010417 3508 cache.go:56] Caching tarball of preloaded images
I0717 10:32:56.010593 3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 10:32:56.010613 3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:32:56.010735 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:56.011714 3508 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:32:56.011815 3508 start.go:364] duration metric: took 76.983µs to acquireMachinesLock for "ha-572000-m02"
I0717 10:32:56.011840 3508 start.go:96] Skipping create...Using existing machine configuration
I0717 10:32:56.011849 3508 fix.go:54] fixHost starting: m02
I0717 10:32:56.012268 3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:56.012290 3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:32:56.021749 3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
I0717 10:32:56.022134 3508 main.go:141] libmachine: () Calling .GetVersion
I0717 10:32:56.022452 3508 main.go:141] libmachine: Using API Version 1
I0717 10:32:56.022466 3508 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:32:56.022707 3508 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:32:56.022831 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:32:56.022920 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
I0717 10:32:56.023010 3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:56.023088 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3461
I0717 10:32:56.024015 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
I0717 10:32:56.024031 3508 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
I0717 10:32:56.024040 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
W0717 10:32:56.024134 3508 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:32:56.066377 3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
I0717 10:32:56.087674 3508 main.go:141] libmachine: (ha-572000-m02) Calling .Start
I0717 10:32:56.087950 3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:56.087999 3508 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
I0717 10:32:56.089806 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
I0717 10:32:56.089821 3508 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3461 is in state "Stopped"
I0717 10:32:56.089839 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
I0717 10:32:56.090122 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
I0717 10:32:56.117133 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
I0717 10:32:56.117180 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
I0717 10:32:56.117265 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:56.117293 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:56.117357 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
I0717 10:32:56.117402 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
I0717 10:32:56.117418 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0717 10:32:56.118762 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Pid is 3526
I0717 10:32:56.119239 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
I0717 10:32:56.119252 3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:56.119326 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
I0717 10:32:56.121158 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
I0717 10:32:56.121244 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
I0717 10:32:56.121275 3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
I0717 10:32:56.121292 3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
I0717 10:32:56.121303 3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
I0717 10:32:56.121311 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
I0717 10:32:56.121322 3508 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
I0717 10:32:56.121381 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
I0717 10:32:56.122119 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
I0717 10:32:56.122366 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:56.122967 3508 machine.go:94] provisionDockerMachine start ...
I0717 10:32:56.122978 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:32:56.123097 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:32:56.123191 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:32:56.123279 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:32:56.123377 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:32:56.123509 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:32:56.123686 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:56.123860 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:32:56.123869 3508 main.go:141] libmachine: About to run SSH command:
hostname
I0717 10:32:56.127424 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0717 10:32:56.136905 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0717 10:32:56.138099 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:56.138119 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:56.138127 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:56.138133 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:56.517427 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0717 10:32:56.517452 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0717 10:32:56.632129 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:56.632146 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:56.632154 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:56.632161 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:56.632978 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0717 10:32:56.632987 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0717 10:33:01.882277 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0717 10:33:01.882372 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0717 10:33:01.882381 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0717 10:33:01.905950 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0717 10:33:07.183510 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0717 10:33:07.183524 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
I0717 10:33:07.183678 3508 buildroot.go:166] provisioning hostname "ha-572000-m02"
I0717 10:33:07.183687 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
I0717 10:33:07.183789 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.183881 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.183992 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.184084 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.184179 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.184316 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.184458 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.184466 3508 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
I0717 10:33:07.250039 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
I0717 10:33:07.250065 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.250206 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.250287 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.250390 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.250483 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.250636 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.250802 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.250815 3508 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts;
fi
fi
I0717 10:33:07.311401 3508 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 10:33:07.311420 3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
I0717 10:33:07.311431 3508 buildroot.go:174] setting up certificates
I0717 10:33:07.311441 3508 provision.go:84] configureAuth start
I0717 10:33:07.311448 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
I0717 10:33:07.311593 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
I0717 10:33:07.311680 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.311768 3508 provision.go:143] copyHostCerts
I0717 10:33:07.311797 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:33:07.311852 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
I0717 10:33:07.311858 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:33:07.312271 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
I0717 10:33:07.312505 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:33:07.312536 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
I0717 10:33:07.312541 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:33:07.312619 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
I0717 10:33:07.312779 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:33:07.312811 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
I0717 10:33:07.312816 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:33:07.312912 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
I0717 10:33:07.313069 3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
I0717 10:33:07.375154 3508 provision.go:177] copyRemoteCerts
I0717 10:33:07.375212 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 10:33:07.375227 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.375382 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.375473 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.375558 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.375656 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:07.409433 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 10:33:07.409505 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 10:33:07.429479 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 10:33:07.429539 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0717 10:33:07.451163 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 10:33:07.451231 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 10:33:07.471509 3508 provision.go:87] duration metric: took 160.057268ms to configureAuth
I0717 10:33:07.471523 3508 buildroot.go:189] setting minikube options for container-runtime
I0717 10:33:07.471702 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:33:07.471715 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:07.471860 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.471964 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.472045 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.472140 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.472216 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.472319 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.472438 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.472446 3508 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0717 10:33:07.526742 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0717 10:33:07.526766 3508 buildroot.go:70] root file system type: tmpfs
I0717 10:33:07.526848 3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0717 10:33:07.526860 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.526992 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.527094 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.527175 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.527248 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.527375 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.527510 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.527555 3508 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.169.0.5"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0717 10:33:07.594480 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.169.0.5
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0717 10:33:07.594502 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.594640 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.594720 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.594808 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.594894 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.595019 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.595164 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.595178 3508 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0717 10:33:09.291500 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0717 10:33:09.291515 3508 machine.go:97] duration metric: took 13.164785942s to provisionDockerMachine
I0717 10:33:09.291524 3508 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
I0717 10:33:09.291531 3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 10:33:09.291546 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.291729 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 10:33:09.291743 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.291855 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.291956 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.292049 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.292155 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:09.335381 3508 ssh_runner.go:195] Run: cat /etc/os-release
I0717 10:33:09.338532 3508 info.go:137] Remote host: Buildroot 2023.02.9
I0717 10:33:09.338541 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
I0717 10:33:09.338631 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
I0717 10:33:09.338771 3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
I0717 10:33:09.338778 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
I0717 10:33:09.338937 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 10:33:09.346285 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
I0717 10:33:09.366379 3508 start.go:296] duration metric: took 74.672934ms for postStartSetup
I0717 10:33:09.366399 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.366579 3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0717 10:33:09.366592 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.366681 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.366764 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.366841 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.366910 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:09.399615 3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0717 10:33:09.399679 3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0717 10:33:09.453746 3508 fix.go:56] duration metric: took 13.437754461s for fixHost
I0717 10:33:09.453771 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.453917 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.454023 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.454133 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.454219 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.454344 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:09.454500 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:09.454509 3508 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0717 10:33:09.507516 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237589.628548940
I0717 10:33:09.507529 3508 fix.go:216] guest clock: 1721237589.628548940
I0717 10:33:09.507535 3508 fix.go:229] Guest: 2024-07-17 10:33:09.62854894 -0700 PDT Remote: 2024-07-17 10:33:09.453761 -0700 PDT m=+32.267325038 (delta=174.78794ms)
I0717 10:33:09.507545 3508 fix.go:200] guest clock delta is within tolerance: 174.78794ms
I0717 10:33:09.507551 3508 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.491465012s
I0717 10:33:09.507572 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.507699 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
I0717 10:33:09.532514 3508 out.go:177] * Found network options:
I0717 10:33:09.552891 3508 out.go:177] - NO_PROXY=192.169.0.5
W0717 10:33:09.574387 3508 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 10:33:09.574424 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.575230 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.575434 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.575533 3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 10:33:09.575579 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
W0717 10:33:09.575674 3508 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 10:33:09.575742 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.575769 3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 10:33:09.575787 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.575982 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.576003 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.576234 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.576305 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.576479 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.576483 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:09.576596 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
W0717 10:33:09.607732 3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 10:33:09.607792 3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 10:33:09.656923 3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 10:33:09.656940 3508 start.go:495] detecting cgroup driver to use...
I0717 10:33:09.657029 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:33:09.673202 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 10:33:09.682149 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 10:33:09.691293 3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 10:33:09.691348 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 10:33:09.700430 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:33:09.709231 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 10:33:09.718168 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:33:09.727036 3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 10:33:09.736298 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 10:33:09.745642 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 10:33:09.754690 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 10:33:09.763621 3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 10:33:09.771717 3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 10:33:09.779861 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:33:09.883183 3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 10:33:09.901989 3508 start.go:495] detecting cgroup driver to use...
I0717 10:33:09.902056 3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0717 10:33:09.919371 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:33:09.932597 3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0717 10:33:09.953462 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:33:09.964583 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:33:09.975437 3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 10:33:09.995754 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:33:10.006015 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:33:10.020825 3508 ssh_runner.go:195] Run: which cri-dockerd
I0717 10:33:10.023692 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0717 10:33:10.030648 3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0717 10:33:10.044228 3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0717 10:33:10.141170 3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0717 10:33:10.249186 3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0717 10:33:10.249214 3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0717 10:33:10.263041 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:33:10.359716 3508 ssh_runner.go:195] Run: sudo systemctl restart docker
I0717 10:34:11.416224 3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021941021s)
I0717 10:34:11.416300 3508 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0717 10:34:11.450835 3508 out.go:177]
W0717 10:34:11.471671 3508 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jul 17 17:33:08 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.044876852Z" level=info msg="Starting up"
Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.045449556Z" level=info msg="containerd not running, starting managed containerd"
Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.049003475Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.064081003Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079222179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079310364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079376764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079411371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079557600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079609621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079752864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079797312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079887739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079928799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080046807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080239575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081923027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081977822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082123136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082166838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082275842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082332754Z" level=info msg="metadata content store policy set" policy=shared
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084273060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084339651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084378389Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084411359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084442922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084509418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084664339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084738339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084774254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084804627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084874943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084911894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084942267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084972768Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085003365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085032856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085062302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085090775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085129743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085161980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085224112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085253798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085286177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085315810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085345112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085374976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085410351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085440979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085471089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085500214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085532017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085571085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085603089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085683933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085717630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085747936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085777505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085805608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085834007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085861655Z" level=info msg="NRI interface is disabled by configuration."
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086142807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086206245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086259095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086322237Z" level=info msg="containerd successfully booted in 0.022994s"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.065923436Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.108166477Z" level=info msg="Loading containers: start."
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.277192209Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.336888641Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.380874805Z" level=info msg="Loading containers: done."
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387565385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387757279Z" level=info msg="Daemon has completed initialization"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411794010Z" level=info msg="API listen on /var/run/docker.sock"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411982753Z" level=info msg="API listen on [::]:2376"
Jul 17 17:33:09 ha-572000-m02 systemd[1]: Started Docker Application Container Engine.
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.490943827Z" level=info msg="Processing signal 'terminated'"
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491923813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491997518Z" level=info msg="Daemon shutdown complete"
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492029261Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492040420Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 17 17:33:10 ha-572000-m02 systemd[1]: Stopping Docker Application Container Engine...
Jul 17 17:33:11 ha-572000-m02 systemd[1]: docker.service: Deactivated successfully.
Jul 17 17:33:11 ha-572000-m02 systemd[1]: Stopped Docker Application Container Engine.
Jul 17 17:33:11 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 17 17:33:11 ha-572000-m02 dockerd[1164]: time="2024-07-17T17:33:11.528450348Z" level=info msg="Starting up"
Jul 17 17:34:11 ha-572000-m02 dockerd[1164]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 17 17:34:11 ha-572000-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0717 10:34:11.471802 3508 out.go:239] *
W0717 10:34:11.473037 3508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:34:11.536857 3508 out.go:177]
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-572000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run: out/minikube-darwin-amd64 node list -p ha-572000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-572000 -n ha-572000: exit status 2 (163.820356ms)
-- stdout --
Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p ha-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-572000 logs -n 25: (2.233321155s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs:
-- stdout --
==> Audit <==
|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| cp | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m02:/home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-572000 ssh -n ha-572000-m02 sudo cat | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | /home/docker/cp-test_ha-572000-m03_ha-572000-m02.txt | | | | | |
| cp | ha-572000 cp ha-572000-m03:/home/docker/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04:/home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-572000 ssh -n ha-572000-m04 sudo cat | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | /home/docker/cp-test_ha-572000-m03_ha-572000-m04.txt | | | | | |
| cp | ha-572000 cp testdata/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04:/home/docker/cp-test.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile3497274976/001/cp-test_ha-572000-m04.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000:/home/docker/cp-test_ha-572000-m04_ha-572000.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-572000 ssh -n ha-572000 sudo cat | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | /home/docker/cp-test_ha-572000-m04_ha-572000.txt | | | | | |
| cp | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m02:/home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-572000 ssh -n ha-572000-m02 sudo cat | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | /home/docker/cp-test_ha-572000-m04_ha-572000-m02.txt | | | | | |
| cp | ha-572000 cp ha-572000-m04:/home/docker/cp-test.txt | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m03:/home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt | | | | | |
| ssh | ha-572000 ssh -n | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | ha-572000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-572000 ssh -n ha-572000-m03 sudo cat | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | /home/docker/cp-test_ha-572000-m04_ha-572000-m03.txt | | | | | |
| node | ha-572000 node stop m02 -v=7 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:31 PDT |
| | --alsologtostderr | | | | | |
| node | ha-572000 node start m02 -v=7 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:31 PDT | 17 Jul 24 10:32 PDT |
| | --alsologtostderr | | | | | |
| node | list -p ha-572000 -v=7 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | |
| | --alsologtostderr | | | | | |
| stop | -p ha-572000 -v=7 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | 17 Jul 24 10:32 PDT |
| | --alsologtostderr | | | | | |
| start | -p ha-572000 --wait=true -v=7 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:32 PDT | |
| | --alsologtostderr | | | | | |
| node | list -p ha-572000 | ha-572000 | jenkins | v1.33.1 | 17 Jul 24 10:34 PDT | |
|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/17 10:32:37
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.22.5 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0717 10:32:37.218202 3508 out.go:291] Setting OutFile to fd 1 ...
I0717 10:32:37.218482 3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:32:37.218488 3508 out.go:304] Setting ErrFile to fd 2...
I0717 10:32:37.218492 3508 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 10:32:37.218678 3508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19283-1099/.minikube/bin
I0717 10:32:37.220111 3508 out.go:298] Setting JSON to false
I0717 10:32:37.243881 3508 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1928,"bootTime":1721235629,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0717 10:32:37.243971 3508 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0717 10:32:37.265852 3508 out.go:177] * [ha-572000] minikube v1.33.1 on Darwin 14.5
I0717 10:32:37.307717 3508 out.go:177] - MINIKUBE_LOCATION=19283
I0717 10:32:37.307783 3508 notify.go:220] Checking for updates...
I0717 10:32:37.352082 3508 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:37.394723 3508 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0717 10:32:37.416561 3508 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 10:32:37.437566 3508 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19283-1099/.minikube
I0717 10:32:37.458758 3508 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 10:32:37.480259 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:37.480391 3508 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 10:32:37.481074 3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:37.481147 3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:32:37.491120 3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
I0717 10:32:37.491492 3508 main.go:141] libmachine: () Calling .GetVersion
I0717 10:32:37.491919 3508 main.go:141] libmachine: Using API Version 1
I0717 10:32:37.491928 3508 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:32:37.492189 3508 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:32:37.492307 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:37.520549 3508 out.go:177] * Using the hyperkit driver based on existing profile
I0717 10:32:37.563535 3508 start.go:297] selected driver: hyperkit
I0717 10:32:37.563555 3508 start.go:901] validating driver "hyperkit" against &{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:32:37.563770 3508 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 10:32:37.563903 3508 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:32:37.564063 3508 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19283-1099/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0717 10:32:37.572774 3508 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
I0717 10:32:37.578697 3508 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:37.578722 3508 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0717 10:32:37.582004 3508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 10:32:37.582058 3508 cni.go:84] Creating CNI manager for ""
I0717 10:32:37.582066 3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0717 10:32:37.582150 3508 start.go:340] cluster config:
{Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:32:37.582277 3508 iso.go:125] acquiring lock: {Name:mkf51f842bcc8a77e9c7c50d642c4c76848e96af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 10:32:37.624644 3508 out.go:177] * Starting "ha-572000" primary control-plane node in "ha-572000" cluster
I0717 10:32:37.645662 3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:32:37.645750 3508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
I0717 10:32:37.645778 3508 cache.go:56] Caching tarball of preloaded images
I0717 10:32:37.645983 3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 10:32:37.646002 3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:32:37.646175 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:37.647084 3508 start.go:360] acquireMachinesLock for ha-572000: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:32:37.647209 3508 start.go:364] duration metric: took 99.885µs to acquireMachinesLock for "ha-572000"
I0717 10:32:37.647240 3508 start.go:96] Skipping create...Using existing machine configuration
I0717 10:32:37.647261 3508 fix.go:54] fixHost starting:
I0717 10:32:37.647673 3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:37.647700 3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:32:37.656651 3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
I0717 10:32:37.657021 3508 main.go:141] libmachine: () Calling .GetVersion
I0717 10:32:37.657336 3508 main.go:141] libmachine: Using API Version 1
I0717 10:32:37.657346 3508 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:32:37.657590 3508 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:32:37.657719 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:37.657832 3508 main.go:141] libmachine: (ha-572000) Calling .GetState
I0717 10:32:37.657936 3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:37.658021 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 2926
I0717 10:32:37.658989 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
I0717 10:32:37.658986 3508 fix.go:112] recreateIfNeeded on ha-572000: state=Stopped err=<nil>
I0717 10:32:37.659004 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
W0717 10:32:37.659109 3508 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:32:37.701727 3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000" ...
I0717 10:32:37.722485 3508 main.go:141] libmachine: (ha-572000) Calling .Start
I0717 10:32:37.722730 3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:37.722799 3508 main.go:141] libmachine: (ha-572000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid
I0717 10:32:37.724830 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid 2926 missing from process table
I0717 10:32:37.724872 3508 main.go:141] libmachine: (ha-572000) DBG | pid 2926 is in state "Stopped"
I0717 10:32:37.724889 3508 main.go:141] libmachine: (ha-572000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid...
I0717 10:32:37.725226 3508 main.go:141] libmachine: (ha-572000) DBG | Using UUID 5f2666de-0b32-4258-9840-7856c1bd4173
I0717 10:32:37.837447 3508 main.go:141] libmachine: (ha-572000) DBG | Generated MAC d2:a6:10:ad:80:98
I0717 10:32:37.837476 3508 main.go:141] libmachine: (ha-572000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
I0717 10:32:37.837593 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:37.837631 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f2666de-0b32-4258-9840-7856c1bd4173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:37.837679 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f2666de-0b32-4258-9840-7856c1bd4173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
I0717 10:32:37.837720 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f2666de-0b32-4258-9840-7856c1bd4173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/ha-572000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
I0717 10:32:37.837736 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0717 10:32:37.839166 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 DEBUG: hyperkit: Pid is 3521
I0717 10:32:37.839653 3508 main.go:141] libmachine: (ha-572000) DBG | Attempt 0
I0717 10:32:37.839674 3508 main.go:141] libmachine: (ha-572000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:37.839714 3508 main.go:141] libmachine: (ha-572000) DBG | hyperkit pid from json: 3521
I0717 10:32:37.841412 3508 main.go:141] libmachine: (ha-572000) DBG | Searching for d2:a6:10:ad:80:98 in /var/db/dhcpd_leases ...
I0717 10:32:37.841498 3508 main.go:141] libmachine: (ha-572000) DBG | Found 7 entries in /var/db/dhcpd_leases!
I0717 10:32:37.841515 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
I0717 10:32:37.841527 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
I0717 10:32:37.841536 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:6e:d3:62:da:43:cf ID:1,6e:d3:62:da:43:cf Lease:0x669950e4}
I0717 10:32:37.841559 3508 main.go:141] libmachine: (ha-572000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x66994ff6}
I0717 10:32:37.841570 3508 main.go:141] libmachine: (ha-572000) DBG | Found match: d2:a6:10:ad:80:98
I0717 10:32:37.841595 3508 main.go:141] libmachine: (ha-572000) DBG | IP: 192.169.0.5
I0717 10:32:37.841705 3508 main.go:141] libmachine: (ha-572000) Calling .GetConfigRaw
I0717 10:32:37.842357 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:37.842580 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:37.843052 3508 machine.go:94] provisionDockerMachine start ...
I0717 10:32:37.843065 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:37.843201 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:37.843303 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:37.843420 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:37.843572 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:37.843663 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:37.843791 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:37.844002 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:37.844014 3508 main.go:141] libmachine: About to run SSH command:
hostname
I0717 10:32:37.847060 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0717 10:32:37.898878 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0717 10:32:37.899633 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:37.899658 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:37.899668 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:37.899678 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:38.277909 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0717 10:32:38.277922 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0717 10:32:38.392613 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:38.392633 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:38.392644 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:38.392676 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:38.393519 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0717 10:32:38.393530 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:38 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0717 10:32:43.648108 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0717 10:32:43.648154 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0717 10:32:43.648161 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0717 10:32:43.672680 3508 main.go:141] libmachine: (ha-572000) DBG | 2024/07/17 10:32:43 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0717 10:32:48.904402 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0717 10:32:48.904418 3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
I0717 10:32:48.904582 3508 buildroot.go:166] provisioning hostname "ha-572000"
I0717 10:32:48.904593 3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
I0717 10:32:48.904692 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:48.904776 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:48.904887 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.904976 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.905073 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:48.905225 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:48.905383 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:48.905392 3508 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-572000 && echo "ha-572000" | sudo tee /etc/hostname
I0717 10:32:48.967564 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000
I0717 10:32:48.967584 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:48.967740 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:48.967836 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.967934 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:48.968014 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:48.968132 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:48.968282 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:48.968293 3508 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-572000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000/g' /etc/hosts;
else
echo '127.0.1.1 ha-572000' | sudo tee -a /etc/hosts;
fi
fi
I0717 10:32:49.026313 3508 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 10:32:49.026336 3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
I0717 10:32:49.026353 3508 buildroot.go:174] setting up certificates
I0717 10:32:49.026367 3508 provision.go:84] configureAuth start
I0717 10:32:49.026375 3508 main.go:141] libmachine: (ha-572000) Calling .GetMachineName
I0717 10:32:49.026507 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:49.026613 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.026706 3508 provision.go:143] copyHostCerts
I0717 10:32:49.026741 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:32:49.026811 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
I0717 10:32:49.026819 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:32:49.026972 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
I0717 10:32:49.027200 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:32:49.027231 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
I0717 10:32:49.027236 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:32:49.027325 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
I0717 10:32:49.027487 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:32:49.027519 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
I0717 10:32:49.027524 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:32:49.027590 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
I0717 10:32:49.027748 3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000 san=[127.0.0.1 192.169.0.5 ha-572000 localhost minikube]
I0717 10:32:49.085766 3508 provision.go:177] copyRemoteCerts
I0717 10:32:49.085812 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 10:32:49.085827 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.086112 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.086217 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.086305 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.086395 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:49.120573 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 10:32:49.120648 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 10:32:49.139510 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 10:32:49.139585 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0717 10:32:49.158247 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 10:32:49.158317 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 10:32:49.177520 3508 provision.go:87] duration metric: took 151.137832ms to configureAuth
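A minimal sketch (not from this log; assumes a shell inside the ha-572000 VM) to confirm the certificates copied above are in place at the paths dockerd's TLS flags reference later:
  ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem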
I0717 10:32:49.177532 3508 buildroot.go:189] setting minikube options for container-runtime
I0717 10:32:49.177693 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:49.177706 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:49.177837 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.177945 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.178031 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.178106 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.178195 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.178315 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:49.178439 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:49.178454 3508 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0717 10:32:49.231928 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0717 10:32:49.231939 3508 buildroot.go:70] root file system type: tmpfs
I0717 10:32:49.232011 3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0717 10:32:49.232025 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.232158 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.232247 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.232341 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.232427 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.232563 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:49.232710 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:49.232755 3508 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0717 10:32:49.295280 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0717 10:32:49.295308 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:49.295446 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:49.295550 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.295637 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:49.295723 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:49.295852 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:49.295991 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:49.296003 3508 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0717 10:32:50.972633 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0717 10:32:50.972648 3508 machine.go:97] duration metric: took 13.129388483s to provisionDockerMachine
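The comments embedded in the generated unit above explain why ExecStart= must be cleared before a new one is set; the conventional drop-in form of that pattern (an illustrative sketch only, not the command minikube ran here, since it writes the whole unit file instead) looks like:
  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
  sudo systemctl daemon-reload && sudo systemctl restart docker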
I0717 10:32:50.972660 3508 start.go:293] postStartSetup for "ha-572000" (driver="hyperkit")
I0717 10:32:50.972668 3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 10:32:50.972678 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:50.972893 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 10:32:50.972908 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:50.973007 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:50.973108 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:50.973193 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:50.973281 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.011765 3508 ssh_runner.go:195] Run: cat /etc/os-release
I0717 10:32:51.016752 3508 info.go:137] Remote host: Buildroot 2023.02.9
I0717 10:32:51.016768 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
I0717 10:32:51.016865 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
I0717 10:32:51.017004 3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
I0717 10:32:51.017011 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
I0717 10:32:51.017179 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 10:32:51.027779 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
I0717 10:32:51.057568 3508 start.go:296] duration metric: took 84.89741ms for postStartSetup
I0717 10:32:51.057590 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.057768 3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0717 10:32:51.057780 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.057871 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.057953 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.058038 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.058120 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.090670 3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0717 10:32:51.090728 3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0717 10:32:51.124190 3508 fix.go:56] duration metric: took 13.476731728s for fixHost
I0717 10:32:51.124211 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.124344 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.124460 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.124556 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.124646 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.124769 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:51.124925 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.5 22 <nil> <nil>}
I0717 10:32:51.124933 3508 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0717 10:32:51.178019 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237571.303332168
I0717 10:32:51.178031 3508 fix.go:216] guest clock: 1721237571.303332168
I0717 10:32:51.178046 3508 fix.go:229] Guest: 2024-07-17 10:32:51.303332168 -0700 PDT Remote: 2024-07-17 10:32:51.124202 -0700 PDT m=+13.941974821 (delta=179.130168ms)
I0717 10:32:51.178065 3508 fix.go:200] guest clock delta is within tolerance: 179.130168ms
I0717 10:32:51.178069 3508 start.go:83] releasing machines lock for "ha-572000", held for 13.530645229s
I0717 10:32:51.178090 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178220 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:51.178321 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178658 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178764 3508 main.go:141] libmachine: (ha-572000) Calling .DriverName
I0717 10:32:51.178848 3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 10:32:51.178881 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.178898 3508 ssh_runner.go:195] Run: cat /version.json
I0717 10:32:51.178911 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHHostname
I0717 10:32:51.178978 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.179001 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHPort
I0717 10:32:51.179061 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.179087 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHKeyPath
I0717 10:32:51.179158 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.179178 3508 main.go:141] libmachine: (ha-572000) Calling .GetSSHUsername
I0717 10:32:51.179272 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.179286 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000/id_rsa Username:docker}
I0717 10:32:51.214891 3508 ssh_runner.go:195] Run: systemctl --version
I0717 10:32:51.259994 3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0717 10:32:51.264962 3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 10:32:51.265002 3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 10:32:51.277704 3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 10:32:51.277717 3508 start.go:495] detecting cgroup driver to use...
I0717 10:32:51.277809 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:32:51.295436 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 10:32:51.304332 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 10:32:51.313061 3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 10:32:51.313115 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 10:32:51.321793 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:32:51.330506 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 10:32:51.339262 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:32:51.347997 3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 10:32:51.356934 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 10:32:51.365798 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 10:32:51.374520 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 10:32:51.383330 3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 10:32:51.391096 3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 10:32:51.398988 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:51.492043 3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
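The sed edits above switch containerd's CRI runc runtime to the cgroupfs driver; a sketch of checking the result on the node (the TOML table path is assumed from a stock config.toml and is not shown in this log):
  grep -n -B2 'SystemdCgroup' /etc/containerd/config.toml
  # expected to show, roughly:
  #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  #     SystemdCgroup = false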
I0717 10:32:51.510670 3508 start.go:495] detecting cgroup driver to use...
I0717 10:32:51.510748 3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0717 10:32:51.522109 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:32:51.533578 3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0717 10:32:51.547583 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:32:51.558324 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:32:51.568495 3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 10:32:51.586295 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:32:51.596174 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:32:51.611388 3508 ssh_runner.go:195] Run: which cri-dockerd
I0717 10:32:51.614154 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0717 10:32:51.621515 3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0717 10:32:51.636315 3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0717 10:32:51.730805 3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0717 10:32:51.833325 3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0717 10:32:51.833396 3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0717 10:32:51.849329 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:51.950120 3508 ssh_runner.go:195] Run: sudo systemctl restart docker
I0717 10:32:54.304256 3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.354082061s)
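The 130-byte /etc/docker/daemon.json written above is not echoed in the log; a typical cgroupfs setting (illustrative content only, assumed rather than taken from the log) can be inspected with:
  cat /etc/docker/daemon.json
  # e.g. (assumed): {"exec-opts": ["native.cgroupdriver=cgroupfs"]}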
I0717 10:32:54.304312 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0717 10:32:54.314507 3508 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0717 10:32:54.327160 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0717 10:32:54.337277 3508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0717 10:32:54.428967 3508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0717 10:32:54.528124 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:54.629785 3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0717 10:32:54.644492 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0717 10:32:54.655322 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:54.750191 3508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0717 10:32:54.814687 3508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0717 10:32:54.814779 3508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0717 10:32:54.819517 3508 start.go:563] Will wait 60s for crictl version
I0717 10:32:54.819571 3508 ssh_runner.go:195] Run: which crictl
I0717 10:32:54.823230 3508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 10:32:54.848640 3508 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 27.0.3
RuntimeApiVersion: v1
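An equivalent manual check against the cri-dockerd socket that the start code waits for (a sketch; run inside the VM, for example after 'minikube -p ha-572000 ssh'):
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps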
I0717 10:32:54.848713 3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0717 10:32:54.866198 3508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0717 10:32:54.925410 3508 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
I0717 10:32:54.925479 3508 main.go:141] libmachine: (ha-572000) Calling .GetIP
I0717 10:32:54.925865 3508 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0717 10:32:54.930367 3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
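The one-liner above drops any stale host.minikube.internal entry and appends the current gateway mapping; a quick check afterwards (sketch):
  grep host.minikube.internal /etc/hosts
  # expect an entry mapping 192.169.0.1 to host.minikube.internal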
I0717 10:32:54.939983 3508 kubeadm.go:883] updating cluster {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 10:32:54.940088 3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:32:54.940151 3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0717 10:32:54.953243 3508 docker.go:685] Got preloaded images: -- stdout --
kindest/kindnetd:v20240715-585640e9
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0717 10:32:54.953256 3508 docker.go:615] Images already preloaded, skipping extraction
I0717 10:32:54.953343 3508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0717 10:32:54.966247 3508 docker.go:685] Got preloaded images: -- stdout --
kindest/kindnetd:v20240715-585640e9
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
ghcr.io/kube-vip/kube-vip:v0.8.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0717 10:32:54.966267 3508 cache_images.go:84] Images are preloaded, skipping loading
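Both `docker images` listings above match the preloaded tarball's contents, so image extraction is skipped. A minimal Go sketch of that presence check is shown below; the required tags are copied from the listing above, `docker` is assumed to be on PATH, and this is only an illustration of the idea, not the cache_images.go logic.

package main

// List image tags the same way the log does and verify the control-plane
// images for v1.30.2 are present.
import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/kube-controller-manager:v1.30.2",
		"registry.k8s.io/kube-scheduler:v1.30.2",
		"registry.k8s.io/kube-proxy:v1.30.2",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
		}
	}
}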
I0717 10:32:54.966280 3508 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.2 docker true true} ...
I0717 10:32:54.966352 3508 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 10:32:54.966420 3508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0717 10:32:54.987201 3508 cni.go:84] Creating CNI manager for ""
I0717 10:32:54.987214 3508 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0717 10:32:54.987234 3508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 10:32:54.987251 3508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-572000 NodeName:ha-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0717 10:32:54.987337 3508 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.169.0.5
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-572000"
kubeletExtraArgs:
node-ip: 192.169.0.5
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
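The generated kubeadm config above is one file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to /var/tmp/minikube/kubeadm.yaml.new by the scp step further down in the log. A small Go sketch that splits the documents and prints each apiVersion/kind is given below as a quick parse check; it assumes gopkg.in/yaml.v3 is available via `go get` and is not part of minikube itself.

package main

// Decode the multi-document kubeadm config and print each document's
// apiVersion and kind.
import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "parse error:", err)
			return
		}
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}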
I0717 10:32:54.987354 3508 kube-vip.go:115] generating kube-vip config ...
I0717 10:32:54.987400 3508 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0717 10:32:54.999700 3508 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
I0717 10:32:54.999787 3508 kube-vip.go:137] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.169.0.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.8.0
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
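The kube-vip static pod above advertises the HA VIP 192.169.0.254 over ARP and runs leader election among the control-plane nodes using the plndr-cp-lock lease, with the values 5/3/1 for lease duration, renew deadline and retry period (treating them as seconds). A sanity check one might apply is the usual client-go leader-election constraint leaseDuration > renewDeadline > retryPeriod; assuming kube-vip follows the same semantics, a minimal sketch with the values taken from the manifest:

package main

// Check the leader-election timings from the kube-vip manifest above.
import (
	"fmt"
	"time"
)

func main() {
	lease := 5 * time.Second // vip_leaseduration
	renew := 3 * time.Second // vip_renewdeadline
	retry := 1 * time.Second // vip_retryperiod

	// Assumption: the client-go style ordering applies here.
	switch {
	case lease <= renew:
		fmt.Println("invalid: leaseDuration must be greater than renewDeadline")
	case renew <= retry:
		fmt.Println("invalid: renewDeadline must be greater than retryPeriod")
	default:
		fmt.Printf("ok: lease=%v renew=%v retry=%v\n", lease, renew, retry)
	}
}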
I0717 10:32:54.999838 3508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 10:32:55.007455 3508 binaries.go:44] Found k8s binaries, skipping transfer
I0717 10:32:55.007500 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0717 10:32:55.014894 3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
I0717 10:32:55.028112 3508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 10:32:55.043389 3508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
I0717 10:32:55.057830 3508 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
I0717 10:32:55.071316 3508 ssh_runner.go:195] Run: grep 192.169.0.254 control-plane.minikube.internal$ /etc/hosts
I0717 10:32:55.074184 3508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 10:32:55.083466 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:32:55.183439 3508 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 10:32:55.197167 3508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000 for IP: 192.169.0.5
I0717 10:32:55.197180 3508 certs.go:194] generating shared ca certs ...
I0717 10:32:55.197190 3508 certs.go:226] acquiring lock for ca certs: {Name:mk556f0034fd1398769f39242a6de33bc5cbce7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.197338 3508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key
I0717 10:32:55.197396 3508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key
I0717 10:32:55.197406 3508 certs.go:256] generating profile certs ...
I0717 10:32:55.197495 3508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key
I0717 10:32:55.197518 3508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7
I0717 10:32:55.197535 3508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
I0717 10:32:55.361955 3508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 ...
I0717 10:32:55.361972 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7: {Name:mk29664a7594975eea689d2f8ed48fdc71e62969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.362392 3508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 ...
I0717 10:32:55.362403 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7: {Name:mk57740b7d279f3d01c1e4241799a0ef5b1e79c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.362628 3508 certs.go:381] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt
I0717 10:32:55.362825 3508 certs.go:385] copying /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key.8191f6c7 -> /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key
I0717 10:32:55.363038 3508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key
I0717 10:32:55.363048 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0717 10:32:55.363071 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0717 10:32:55.363089 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0717 10:32:55.363110 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0717 10:32:55.363127 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0717 10:32:55.363144 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0717 10:32:55.363163 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0717 10:32:55.363191 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0717 10:32:55.363269 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem (1338 bytes)
W0717 10:32:55.363307 3508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639_empty.pem, impossibly tiny 0 bytes
I0717 10:32:55.363315 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem (1679 bytes)
I0717 10:32:55.363344 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem (1078 bytes)
I0717 10:32:55.363373 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem (1123 bytes)
I0717 10:32:55.363400 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem (1679 bytes)
I0717 10:32:55.363474 3508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem (1708 bytes)
I0717 10:32:55.363509 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.363530 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem -> /usr/share/ca-certificates/1639.pem
I0717 10:32:55.363548 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /usr/share/ca-certificates/16392.pem
I0717 10:32:55.363978 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 10:32:55.392580 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0717 10:32:55.424360 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 10:32:55.448923 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0717 10:32:55.478217 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0717 10:32:55.513430 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0717 10:32:55.570074 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 10:32:55.603052 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 10:32:55.623021 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 10:32:55.641658 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/1639.pem --> /usr/share/ca-certificates/1639.pem (1338 bytes)
I0717 10:32:55.661447 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /usr/share/ca-certificates/16392.pem (1708 bytes)
I0717 10:32:55.681020 3508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 10:32:55.694280 3508 ssh_runner.go:195] Run: openssl version
I0717 10:32:55.698669 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 10:32:55.707011 3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.710297 3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:11 /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.710338 3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 10:32:55.714541 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 10:32:55.722665 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1639.pem && ln -fs /usr/share/ca-certificates/1639.pem /etc/ssl/certs/1639.pem"
I0717 10:32:55.730951 3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1639.pem
I0717 10:32:55.734212 3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:20 /usr/share/ca-certificates/1639.pem
I0717 10:32:55.734256 3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1639.pem
I0717 10:32:55.738428 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1639.pem /etc/ssl/certs/51391683.0"
I0717 10:32:55.746621 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16392.pem && ln -fs /usr/share/ca-certificates/16392.pem /etc/ssl/certs/16392.pem"
I0717 10:32:55.754849 3508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16392.pem
I0717 10:32:55.758298 3508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:20 /usr/share/ca-certificates/16392.pem
I0717 10:32:55.758341 3508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16392.pem
I0717 10:32:55.762565 3508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16392.pem /etc/ssl/certs/3ec20f2e.0"
I0717 10:32:55.770829 3508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 10:32:55.774715 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0717 10:32:55.780174 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0717 10:32:55.784640 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0717 10:32:55.789061 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0717 10:32:55.793372 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0717 10:32:55.797672 3508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
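The series of `openssl x509 -checkend 86400` runs above confirms that none of the control-plane certificates expire within the next 24 hours. An equivalent check in Go with crypto/x509, using one of the certificate paths from the log, is sketched below; it is an illustration of the same test, not minikube's own code.

package main

// Report whether the certificate expires within the next 24 hours,
// mirroring `openssl x509 -checkend 86400`.
import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
	}
}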
I0717 10:32:55.802149 3508 kubeadm.go:392] StartCluster: {Name:ha-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 C
lusterName:ha-572000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 10:32:55.802263 3508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0717 10:32:55.813831 3508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 10:32:55.821229 3508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0717 10:32:55.821245 3508 kubeadm.go:593] restartPrimaryControlPlane start ...
I0717 10:32:55.821296 3508 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0717 10:32:55.828842 3508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0717 10:32:55.829172 3508 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-572000" does not appear in /Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:55.829253 3508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19283-1099/kubeconfig needs updating (will repair): [kubeconfig missing "ha-572000" cluster setting kubeconfig missing "ha-572000" context setting]
I0717 10:32:55.829432 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.829834 3508 loader.go:395] Config loaded from file: /Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:55.830028 3508 kapi.go:59] client config for ha-572000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/client.key", CAFile:"/Users/jenkins/minikube-integration/19283-1099/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x71e8b40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0717 10:32:55.830325 3508 cert_rotation.go:137] Starting client certificate rotation controller
I0717 10:32:55.830504 3508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0717 10:32:55.837614 3508 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
I0717 10:32:55.837631 3508 kubeadm.go:597] duration metric: took 16.382346ms to restartPrimaryControlPlane
I0717 10:32:55.837636 3508 kubeadm.go:394] duration metric: took 35.493194ms to StartCluster
I0717 10:32:55.837647 3508 settings.go:142] acquiring lock: {Name:mkc45f011a907c66e2dbca7dadfff37ab48f7d33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.837726 3508 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/19283-1099/kubeconfig
I0717 10:32:55.838160 3508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19283-1099/kubeconfig: {Name:mk47c0ee7be69c05382b12c14b16d695a32165a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 10:32:55.838398 3508 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0717 10:32:55.838411 3508 start.go:241] waiting for startup goroutines ...
I0717 10:32:55.838425 3508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0717 10:32:55.838529 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:55.881476 3508 out.go:177] * Enabled addons:
I0717 10:32:55.902556 3508 addons.go:510] duration metric: took 64.135812ms for enable addons: enabled=[]
I0717 10:32:55.902605 3508 start.go:246] waiting for cluster config update ...
I0717 10:32:55.902617 3508 start.go:255] writing updated cluster config ...
I0717 10:32:55.924553 3508 out.go:177]
I0717 10:32:55.945720 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:32:55.945818 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:55.967938 3508 out.go:177] * Starting "ha-572000-m02" control-plane node in "ha-572000" cluster
I0717 10:32:56.010383 3508 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0717 10:32:56.010417 3508 cache.go:56] Caching tarball of preloaded images
I0717 10:32:56.010593 3508 preload.go:172] Found /Users/jenkins/minikube-integration/19283-1099/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0717 10:32:56.010613 3508 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
I0717 10:32:56.010735 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:56.011714 3508 start.go:360] acquireMachinesLock for ha-572000-m02: {Name:mk35dc96ec34810d74a098085edb80c9e36fb4a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0717 10:32:56.011815 3508 start.go:364] duration metric: took 76.983µs to acquireMachinesLock for "ha-572000-m02"
I0717 10:32:56.011840 3508 start.go:96] Skipping create...Using existing machine configuration
I0717 10:32:56.011849 3508 fix.go:54] fixHost starting: m02
I0717 10:32:56.012268 3508 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0717 10:32:56.012290 3508 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0717 10:32:56.021749 3508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51769
I0717 10:32:56.022134 3508 main.go:141] libmachine: () Calling .GetVersion
I0717 10:32:56.022452 3508 main.go:141] libmachine: Using API Version 1
I0717 10:32:56.022466 3508 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 10:32:56.022707 3508 main.go:141] libmachine: () Calling .GetMachineName
I0717 10:32:56.022831 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:32:56.022920 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetState
I0717 10:32:56.023010 3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:56.023088 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3461
I0717 10:32:56.024015 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
I0717 10:32:56.024031 3508 fix.go:112] recreateIfNeeded on ha-572000-m02: state=Stopped err=<nil>
I0717 10:32:56.024040 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
W0717 10:32:56.024134 3508 fix.go:138] unexpected machine state, will restart: <nil>
I0717 10:32:56.066377 3508 out.go:177] * Restarting existing hyperkit VM for "ha-572000-m02" ...
I0717 10:32:56.087674 3508 main.go:141] libmachine: (ha-572000-m02) Calling .Start
I0717 10:32:56.087950 3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:56.087999 3508 main.go:141] libmachine: (ha-572000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid
I0717 10:32:56.089806 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid 3461 missing from process table
I0717 10:32:56.089821 3508 main.go:141] libmachine: (ha-572000-m02) DBG | pid 3461 is in state "Stopped"
I0717 10:32:56.089839 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid...
I0717 10:32:56.090122 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Using UUID b5da5881-83da-4916-aec8-9a96c30c8c05
I0717 10:32:56.117133 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Generated MAC 2:60:33:0:68:8b
I0717 10:32:56.117180 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000
I0717 10:32:56.117265 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:56.117293 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b5da5881-83da-4916-aec8-9a96c30c8c05", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bea20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0717 10:32:56.117357 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b5da5881-83da-4916-aec8-9a96c30c8c05", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machine
s/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"}
I0717 10:32:56.117402 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b5da5881-83da-4916-aec8-9a96c30c8c05 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/ha-572000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/tty,log=/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/bzimage,/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-572000"
I0717 10:32:56.117418 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0717 10:32:56.118762 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 DEBUG: hyperkit: Pid is 3526
I0717 10:32:56.119239 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Attempt 0
I0717 10:32:56.119252 3508 main.go:141] libmachine: (ha-572000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0717 10:32:56.119326 3508 main.go:141] libmachine: (ha-572000-m02) DBG | hyperkit pid from json: 3526
I0717 10:32:56.121158 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Searching for 2:60:33:0:68:8b in /var/db/dhcpd_leases ...
I0717 10:32:56.121244 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
I0717 10:32:56.121275 3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:a6:10:ad:80:98 ID:1,d2:a6:10:ad:80:98 Lease:0x669951be}
I0717 10:32:56.121292 3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:1e:37:45:6a:f1:7f ID:1,1e:37:45:6a:f1:7f Lease:0x6698001b}
I0717 10:32:56.121303 3508 main.go:141] libmachine: (ha-572000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:2:60:33:0:68:8b ID:1,2:60:33:0:68:8b Lease:0x6699517a}
I0717 10:32:56.121311 3508 main.go:141] libmachine: (ha-572000-m02) DBG | Found match: 2:60:33:0:68:8b
I0717 10:32:56.121322 3508 main.go:141] libmachine: (ha-572000-m02) DBG | IP: 192.169.0.6
I0717 10:32:56.121381 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetConfigRaw
I0717 10:32:56.122119 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
I0717 10:32:56.122366 3508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19283-1099/.minikube/profiles/ha-572000/config.json ...
I0717 10:32:56.122967 3508 machine.go:94] provisionDockerMachine start ...
I0717 10:32:56.122978 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:32:56.123097 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:32:56.123191 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:32:56.123279 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:32:56.123377 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:32:56.123509 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:32:56.123686 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:32:56.123860 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:32:56.123869 3508 main.go:141] libmachine: About to run SSH command:
hostname
I0717 10:32:56.127424 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0717 10:32:56.136905 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0717 10:32:56.138099 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:56.138119 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:56.138127 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:56.138133 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:56.517427 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0717 10:32:56.517452 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0717 10:32:56.632129 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0717 10:32:56.632146 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0717 10:32:56.632154 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0717 10:32:56.632161 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0717 10:32:56.632978 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0717 10:32:56.632987 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:32:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0717 10:33:01.882277 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0717 10:33:01.882372 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0717 10:33:01.882381 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0717 10:33:01.905950 3508 main.go:141] libmachine: (ha-572000-m02) DBG | 2024/07/17 10:33:01 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0717 10:33:07.183510 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0717 10:33:07.183524 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
I0717 10:33:07.183678 3508 buildroot.go:166] provisioning hostname "ha-572000-m02"
I0717 10:33:07.183687 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
I0717 10:33:07.183789 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.183881 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.183992 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.184084 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.184179 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.184316 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.184458 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.184466 3508 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-572000-m02 && echo "ha-572000-m02" | sudo tee /etc/hostname
I0717 10:33:07.250039 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-572000-m02
I0717 10:33:07.250065 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.250206 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.250287 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.250390 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.250483 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.250636 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.250802 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.250815 3508 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-572000-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-572000-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-572000-m02' | sudo tee -a /etc/hosts;
fi
fi
I0717 10:33:07.311401 3508 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 10:33:07.311420 3508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19283-1099/.minikube CaCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19283-1099/.minikube}
I0717 10:33:07.311431 3508 buildroot.go:174] setting up certificates
I0717 10:33:07.311441 3508 provision.go:84] configureAuth start
I0717 10:33:07.311448 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetMachineName
I0717 10:33:07.311593 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
I0717 10:33:07.311680 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.311768 3508 provision.go:143] copyHostCerts
I0717 10:33:07.311797 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:33:07.311852 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem, removing ...
I0717 10:33:07.311858 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem
I0717 10:33:07.312271 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/cert.pem (1123 bytes)
I0717 10:33:07.312505 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:33:07.312536 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem, removing ...
I0717 10:33:07.312541 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem
I0717 10:33:07.312619 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/key.pem (1679 bytes)
I0717 10:33:07.312779 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:33:07.312811 3508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem, removing ...
I0717 10:33:07.312816 3508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem
I0717 10:33:07.312912 3508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19283-1099/.minikube/ca.pem (1078 bytes)
I0717 10:33:07.313069 3508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca-key.pem org=jenkins.ha-572000-m02 san=[127.0.0.1 192.169.0.6 ha-572000-m02 localhost minikube]
I0717 10:33:07.375154 3508 provision.go:177] copyRemoteCerts
I0717 10:33:07.375212 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 10:33:07.375227 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.375382 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.375473 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.375558 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.375656 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:07.409433 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0717 10:33:07.409505 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 10:33:07.429479 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem -> /etc/docker/server.pem
I0717 10:33:07.429539 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0717 10:33:07.451163 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0717 10:33:07.451231 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 10:33:07.471509 3508 provision.go:87] duration metric: took 160.057268ms to configureAuth
I0717 10:33:07.471523 3508 buildroot.go:189] setting minikube options for container-runtime
I0717 10:33:07.471702 3508 config.go:182] Loaded profile config "ha-572000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 10:33:07.471715 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:07.471860 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.471964 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.472045 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.472140 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.472216 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.472319 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.472438 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.472446 3508 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0717 10:33:07.526742 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0717 10:33:07.526766 3508 buildroot.go:70] root file system type: tmpfs
I0717 10:33:07.526848 3508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0717 10:33:07.526860 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.526992 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.527094 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.527175 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.527248 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.527375 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.527510 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.527555 3508 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.169.0.5"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0717 10:33:07.594480 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.169.0.5
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0717 10:33:07.594502 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:07.594640 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:07.594720 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.594808 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:07.594894 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:07.595019 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:07.595164 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:07.595178 3508 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0717 10:33:09.291500 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
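(Note on the step above: the one-liner is minikube's install-if-changed pattern. `diff -u` exits non-zero when the rendered unit differs from the installed one, or, as here, when no unit is installed yet ("can't stat"), so the `|| { ... }` branch moves the new file into place, reloads systemd, enables the unit (hence the "Created symlink" output), and restarts docker. A minimal sketch of the same pattern, with a purely illustrative unit name, would be:

# hedged sketch of the install-if-changed idiom used above; "example.service" is hypothetical
sudo diff -u /lib/systemd/system/example.service /lib/systemd/system/example.service.new \
  || { sudo mv /lib/systemd/system/example.service.new /lib/systemd/system/example.service; \
       sudo systemctl daemon-reload && sudo systemctl enable example.service && sudo systemctl restart example.service; }
)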
I0717 10:33:09.291515 3508 machine.go:97] duration metric: took 13.164785942s to provisionDockerMachine
I0717 10:33:09.291524 3508 start.go:293] postStartSetup for "ha-572000-m02" (driver="hyperkit")
I0717 10:33:09.291531 3508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 10:33:09.291546 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.291729 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 10:33:09.291743 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.291855 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.291956 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.292049 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.292155 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:09.335381 3508 ssh_runner.go:195] Run: cat /etc/os-release
I0717 10:33:09.338532 3508 info.go:137] Remote host: Buildroot 2023.02.9
I0717 10:33:09.338541 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/addons for local assets ...
I0717 10:33:09.338631 3508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19283-1099/.minikube/files for local assets ...
I0717 10:33:09.338771 3508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> 16392.pem in /etc/ssl/certs
I0717 10:33:09.338778 3508 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem -> /etc/ssl/certs/16392.pem
I0717 10:33:09.338937 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 10:33:09.346285 3508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19283-1099/.minikube/files/etc/ssl/certs/16392.pem --> /etc/ssl/certs/16392.pem (1708 bytes)
I0717 10:33:09.366379 3508 start.go:296] duration metric: took 74.672934ms for postStartSetup
I0717 10:33:09.366399 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.366579 3508 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0717 10:33:09.366592 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.366681 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.366764 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.366841 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.366910 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:09.399615 3508 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0717 10:33:09.399679 3508 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0717 10:33:09.453746 3508 fix.go:56] duration metric: took 13.437754461s for fixHost
I0717 10:33:09.453771 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.453917 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.454023 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.454133 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.454219 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.454344 3508 main.go:141] libmachine: Using SSH client type: native
I0717 10:33:09.454500 3508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x5d44060] 0x5d46dc0 <nil> [] 0s} 192.169.0.6 22 <nil> <nil>}
I0717 10:33:09.454509 3508 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0717 10:33:09.507516 3508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237589.628548940
I0717 10:33:09.507529 3508 fix.go:216] guest clock: 1721237589.628548940
I0717 10:33:09.507535 3508 fix.go:229] Guest: 2024-07-17 10:33:09.62854894 -0700 PDT Remote: 2024-07-17 10:33:09.453761 -0700 PDT m=+32.267325038 (delta=174.78794ms)
I0717 10:33:09.507545 3508 fix.go:200] guest clock delta is within tolerance: 174.78794ms
I0717 10:33:09.507551 3508 start.go:83] releasing machines lock for "ha-572000-m02", held for 13.491465012s
I0717 10:33:09.507572 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.507699 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetIP
I0717 10:33:09.532514 3508 out.go:177] * Found network options:
I0717 10:33:09.552891 3508 out.go:177] - NO_PROXY=192.169.0.5
W0717 10:33:09.574387 3508 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 10:33:09.574424 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.575230 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.575434 3508 main.go:141] libmachine: (ha-572000-m02) Calling .DriverName
I0717 10:33:09.575533 3508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 10:33:09.575579 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
W0717 10:33:09.575674 3508 proxy.go:119] fail to check proxy env: Error ip not in block
I0717 10:33:09.575742 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.575769 3508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 10:33:09.575787 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHHostname
I0717 10:33:09.575982 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.576003 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHPort
I0717 10:33:09.576234 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHKeyPath
I0717 10:33:09.576305 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.576479 3508 main.go:141] libmachine: (ha-572000-m02) Calling .GetSSHUsername
I0717 10:33:09.576483 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
I0717 10:33:09.576596 3508 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19283-1099/.minikube/machines/ha-572000-m02/id_rsa Username:docker}
W0717 10:33:09.607732 3508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0717 10:33:09.607792 3508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 10:33:09.656923 3508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0717 10:33:09.656940 3508 start.go:495] detecting cgroup driver to use...
I0717 10:33:09.657029 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:33:09.673202 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 10:33:09.682149 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 10:33:09.691293 3508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 10:33:09.691348 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 10:33:09.700430 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:33:09.709231 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 10:33:09.718168 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 10:33:09.727036 3508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 10:33:09.736298 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 10:33:09.745642 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 10:33:09.754690 3508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 10:33:09.763621 3508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 10:33:09.771717 3508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 10:33:09.779861 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:33:09.883183 3508 ssh_runner.go:195] Run: sudo systemctl restart containerd
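(For context: the sed commands above rewrite the CRI plugin section of /etc/containerd/config.toml before containerd is restarted. A hedged way to confirm the key cgroup-driver setting they enforce — assuming the usual config.toml layout, not copied from this node — would be:

# illustrative check; expects the post-edit value under the runc options table
grep -n 'SystemdCgroup' /etc/containerd/config.toml
# expected (assumed layout): SystemdCgroup = false
)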
I0717 10:33:09.901989 3508 start.go:495] detecting cgroup driver to use...
I0717 10:33:09.902056 3508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0717 10:33:09.919371 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:33:09.932597 3508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0717 10:33:09.953462 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0717 10:33:09.964583 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:33:09.975437 3508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 10:33:09.995754 3508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 10:33:10.006015 3508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 10:33:10.020825 3508 ssh_runner.go:195] Run: which cri-dockerd
I0717 10:33:10.023692 3508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0717 10:33:10.030648 3508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0717 10:33:10.044228 3508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0717 10:33:10.141170 3508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0717 10:33:10.249186 3508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0717 10:33:10.249214 3508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
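(The 130-byte payload written to /etc/docker/daemon.json is not recorded in this log. Purely as a hedged illustration of a daemon.json that selects the cgroupfs driver, as the preceding line describes — not the file minikube actually wrote — such a config could be installed like this:

# illustrative only; actual contents are not shown in this log
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
)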
I0717 10:33:10.263041 3508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 10:33:10.359716 3508 ssh_runner.go:195] Run: sudo systemctl restart docker
I0717 10:34:11.416224 3508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.021941021s)
I0717 10:34:11.416300 3508 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0717 10:34:11.450835 3508 out.go:177]
W0717 10:34:11.471671 3508 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
Jul 17 17:33:08 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.044876852Z" level=info msg="Starting up"
Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.045449556Z" level=info msg="containerd not running, starting managed containerd"
Jul 17 17:33:08 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:08.049003475Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.064081003Z" level=info msg="starting containerd" revision=2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 version=v1.7.19
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079222179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079310364Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079376764Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079411371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079557600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079609621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079752864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079797312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079887739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.079928799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080046807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.080239575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081923027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.081977822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082123136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082166838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082275842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.082332754Z" level=info msg="metadata content store policy set" policy=shared
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084273060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084339651Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084378389Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084411359Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084442922Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084509418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084664339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084738339Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084774254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084804627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084874943Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084911894Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084942267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.084972768Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085003365Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085032856Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085062302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085090775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085129743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085161980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085192066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085224112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085253798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085286177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085315810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085345112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085374976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085410351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085440979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085471089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085500214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085532017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085571085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085603089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085683933Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085717630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085747936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085777505Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085805608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085834007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.085861655Z" level=info msg="NRI interface is disabled by configuration."
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086142807Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086206245Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086259095Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 17 17:33:08 ha-572000-m02 dockerd[496]: time="2024-07-17T17:33:08.086322237Z" level=info msg="containerd successfully booted in 0.022994s"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.065923436Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.108166477Z" level=info msg="Loading containers: start."
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.277192209Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.336888641Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.380874805Z" level=info msg="Loading containers: done."
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387565385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.387757279Z" level=info msg="Daemon has completed initialization"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411794010Z" level=info msg="API listen on /var/run/docker.sock"
Jul 17 17:33:09 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:09.411982753Z" level=info msg="API listen on [::]:2376"
Jul 17 17:33:09 ha-572000-m02 systemd[1]: Started Docker Application Container Engine.
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.490943827Z" level=info msg="Processing signal 'terminated'"
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491923813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.491997518Z" level=info msg="Daemon shutdown complete"
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492029261Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 17 17:33:10 ha-572000-m02 dockerd[490]: time="2024-07-17T17:33:10.492040420Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 17 17:33:10 ha-572000-m02 systemd[1]: Stopping Docker Application Container Engine...
Jul 17 17:33:11 ha-572000-m02 systemd[1]: docker.service: Deactivated successfully.
Jul 17 17:33:11 ha-572000-m02 systemd[1]: Stopped Docker Application Container Engine.
Jul 17 17:33:11 ha-572000-m02 systemd[1]: Starting Docker Application Container Engine...
Jul 17 17:33:11 ha-572000-m02 dockerd[1164]: time="2024-07-17T17:33:11.528450348Z" level=info msg="Starting up"
Jul 17 17:34:11 ha-572000-m02 dockerd[1164]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 17 17:34:11 ha-572000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 17 17:34:11 ha-572000-m02 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0717 10:34:11.471802 3508 out.go:239] *
W0717 10:34:11.473037 3508 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 10:34:11.536857 3508 out.go:177]
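(Summary of the failure above: after minikube rewrote the docker unit on ha-572000-m02 and ran `sudo systemctl restart docker`, the restarted dockerd (pid 1164) could not dial /run/containerd/containerd.sock before its startup deadline expired, so docker.service failed and the start command exited with RUNTIME_ENABLE. A hedged set of follow-up checks one might run on the node — standard systemd/containerd commands, not taken from this log — would be:

# inspect both runtimes and the socket dockerd failed to reach
sudo systemctl status containerd docker --no-pager
sudo ls -l /run/containerd/containerd.sock
sudo journalctl -u containerd --no-pager | tail -n 50
)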
==> Docker <==
Jul 17 17:33:02 ha-572000 dockerd[1178]: time="2024-07-17T17:33:02.455192722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740501414Z" level=info msg="shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.740886535Z" level=warning msg="cleaning up after shim disconnected" id=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b namespace=moby
Jul 17 17:33:23 ha-572000 dockerd[1178]: time="2024-07-17T17:33:23.741204478Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 17 17:33:23 ha-572000 dockerd[1171]: time="2024-07-17T17:33:23.741723202Z" level=info msg="ignoring event" container=a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747049658Z" level=info msg="shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747592119Z" level=warning msg="cleaning up after shim disconnected" id=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d namespace=moby
Jul 17 17:33:24 ha-572000 dockerd[1178]: time="2024-07-17T17:33:24.747636154Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 17 17:33:24 ha-572000 dockerd[1171]: time="2024-07-17T17:33:24.747788453Z" level=info msg="ignoring event" container=3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836028865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836093957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836105101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:33:34 ha-572000 dockerd[1178]: time="2024-07-17T17:33:34.836225522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652806846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652893670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.652906541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:33:38 ha-572000 dockerd[1178]: time="2024-07-17T17:33:38.657845113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 17 17:33:59 ha-572000 dockerd[1171]: time="2024-07-17T17:33:59.069677227Z" level=info msg="ignoring event" container=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071115848Z" level=info msg="shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071609934Z" level=warning msg="cleaning up after shim disconnected" id=8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f namespace=moby
Jul 17 17:33:59 ha-572000 dockerd[1178]: time="2024-07-17T17:33:59.071768605Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 17 17:34:00 ha-572000 dockerd[1171]: time="2024-07-17T17:34:00.079691666Z" level=info msg="ignoring event" container=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081342846Z" level=info msg="shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081524291Z" level=warning msg="cleaning up after shim disconnected" id=1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca namespace=moby
Jul 17 17:34:00 ha-572000 dockerd[1178]: time="2024-07-17T17:34:00.081549356Z" level=info msg="cleaning up dead shim" namespace=moby
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
8f09c09ed996a 56ce0fd9fb532 34 seconds ago Exited kube-apiserver 2 6d7eb0e874999 kube-apiserver-ha-572000
1e8f9939826f4 e874818b3caac 38 seconds ago Exited kube-controller-manager 2 b7d58c526c444 kube-controller-manager-ha-572000
138bf6784d59c 38af8ddebf499 About a minute ago Running kube-vip 0 df04438a4c5cc kube-vip-ha-572000
a53f8fcdf5d97 7820c83aa1394 About a minute ago Running kube-scheduler 1 bfd880612991e kube-scheduler-ha-572000
a3398a8ca33aa 3861cfcd7c04c About a minute ago Running etcd 1 986ceb5a6f870 etcd-ha-572000
e1a5eb1bed550 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 3 minutes ago Exited busybox 0 29ab413131af2 busybox-fc5497c4f-5r4wl
bb44d784bb7ab cbb01a7bd410d 6 minutes ago Exited coredns 0 b8c622f08395f coredns-7db6d8ff4d-2phrp
7b275812468c9 cbb01a7bd410d 6 minutes ago Exited coredns 0 2588bd7c40c23 coredns-7db6d8ff4d-9dzd5
12ba2e181ee9a 6e38f40d628db 6 minutes ago Exited storage-provisioner 0 04b7cdcbedf20 storage-provisioner
6e40e1427ab20 kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493 6 minutes ago Exited kindnet-cni 0 32dc836c0a2df kindnet-t85bv
2aeed19835352 53c535741fb44 6 minutes ago Exited kube-proxy 0 f688e08d591be kube-proxy-hst7h
9200160f355ce ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f 6 minutes ago Exited kube-vip 0 1742f4f388abf kube-vip-ha-572000
e29f4fe295c1c 7820c83aa1394 6 minutes ago Exited kube-scheduler 0 25d825604d9f6 kube-scheduler-ha-572000
c6527d620dad2 3861cfcd7c04c 6 minutes ago Exited etcd 0 8844aab508d79 etcd-ha-572000
==> coredns [7b275812468c] <==
[INFO] 10.244.0.4:49035 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001628883s
[INFO] 10.244.0.4:59665 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081818s
[INFO] 10.244.0.4:52274 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077378s
[INFO] 10.244.1.2:47113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103907s
[INFO] 10.244.2.2:57796 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060561s
[INFO] 10.244.2.2:40339 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000425199s
[INFO] 10.244.2.2:38339 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010468s
[INFO] 10.244.2.2:50750 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070547s
[INFO] 10.244.0.4:57426 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000045039s
[INFO] 10.244.0.4:44283 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000031195s
[INFO] 10.244.1.2:56783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117359s
[INFO] 10.244.1.2:34315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000061358s
[INFO] 10.244.1.2:55792 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062223s
[INFO] 10.244.2.2:35768 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000086426s
[INFO] 10.244.2.2:42473 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102022s
[INFO] 10.244.0.4:53524 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000087177s
[INFO] 10.244.0.4:35224 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079092s
[INFO] 10.244.0.4:53020 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075585s
[INFO] 10.244.1.2:58074 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138293s
[INFO] 10.244.1.2:40567 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088337s
[INFO] 10.244.1.2:46301 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065328s
[INFO] 10.244.1.2:59898 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000075157s
[INFO] 10.244.2.2:48754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000081888s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [bb44d784bb7a] <==
[INFO] 10.244.0.4:54052 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.001019206s
[INFO] 10.244.0.4:45412 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000077905s
[INFO] 10.244.0.4:37924 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159382s
[INFO] 10.244.1.2:45862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108196s
[INFO] 10.244.1.2:47464 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000616154s
[INFO] 10.244.1.2:53720 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059338s
[INFO] 10.244.1.2:49774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123053s
[INFO] 10.244.1.2:54472 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000097287s
[INFO] 10.244.1.2:52054 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064095s
[INFO] 10.244.1.2:54428 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064142s
[INFO] 10.244.2.2:53088 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109393s
[INFO] 10.244.2.2:49993 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000114279s
[INFO] 10.244.2.2:42708 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063661s
[INFO] 10.244.2.2:57150 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000043434s
[INFO] 10.244.0.4:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112397s
[INFO] 10.244.0.4:44116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057797s
[INFO] 10.244.1.2:37635 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105428s
[INFO] 10.244.2.2:52081 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131248s
[INFO] 10.244.2.2:54912 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087858s
[INFO] 10.244.0.4:53981 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079367s
[INFO] 10.244.2.2:45850 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152036s
[INFO] 10.244.2.2:36004 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223s
[INFO] 10.244.2.2:54459 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000128834s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E0717 17:34:12.970234 2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0717 17:34:12.971580 2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0717 17:34:12.972853 2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0717 17:34:12.973489 2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
E0717 17:34:12.975487 2557 memcache.go:265] couldn't get current server API group list: Get "https://localhost:8443/api?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?
==> dmesg <==
[ +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +0.006777] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.574354] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
[ +1.320177] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +1.823982] systemd-fstab-generator[476]: Ignoring "noauto" option for root device
[ +0.112634] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
[ +1.921936] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
[ +0.055582] kauditd_printk_skb: 79 callbacks suppressed
[ +0.184169] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
[ +0.104772] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
[ +0.113810] systemd-fstab-generator[1163]: Ignoring "noauto" option for root device
[ +2.482664] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
[ +0.099717] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
[ +0.099983] systemd-fstab-generator[1403]: Ignoring "noauto" option for root device
[ +0.118727] systemd-fstab-generator[1418]: Ignoring "noauto" option for root device
[ +0.437794] systemd-fstab-generator[1575]: Ignoring "noauto" option for root device
[Jul17 17:33] kauditd_printk_skb: 234 callbacks suppressed
[ +21.575073] kauditd_printk_skb: 40 callbacks suppressed
[ +31.253030] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
[ +0.000025] clocksource: 'hpet' wd_now: 2db2a3c3 wd_last: 2d0e4271 mask: ffffffff
[ +0.000022] clocksource: 'tsc' cs_now: 5d6b30d2ea8 cs_last: 5d5e653cfb0 mask: ffffffffffffffff
[ +0.001528] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[ +0.002348] clocksource: Checking clocksource tsc synchronization from CPU 0.
==> etcd [a3398a8ca33a] <==
{"level":"info","ts":"2024-07-17T17:34:06.481084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
{"level":"warn","ts":"2024-07-17T17:34:07.932636Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
{"level":"warn","ts":"2024-07-17T17:34:07.932741Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
{"level":"warn","ts":"2024-07-17T17:34:07.932687Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: no route to host"}
{"level":"warn","ts":"2024-07-17T17:34:07.933493Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
{"level":"info","ts":"2024-07-17T17:34:08.181523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-17T17:34:08.181875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-17T17:34:08.182868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:08.183418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:08.183717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:09.88894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-17T17:34:09.889285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-17T17:34:09.88953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:09.890409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:09.890542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:11.580196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 2"}
{"level":"info","ts":"2024-07-17T17:34:11.580274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 2"}
{"level":"info","ts":"2024-07-17T17:34:11.580332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:11.580874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to 1d3f36ee75516151 at term 2"}
{"level":"info","ts":"2024-07-17T17:34:11.58095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 2, index: 1902] sent MsgPreVote request to f6a6d94fe6ea5bc8 at term 2"}
{"level":"warn","ts":"2024-07-17T17:34:12.931535Z","caller":"etcdserver/server.go:2089","msg":"failed to publish local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-572000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
{"level":"warn","ts":"2024-07-17T17:34:12.933019Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
{"level":"warn","ts":"2024-07-17T17:34:12.933094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
{"level":"warn","ts":"2024-07-17T17:34:12.934267Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f6a6d94fe6ea5bc8","rtt":"0s","error":"dial tcp 192.169.0.6:2380: connect: connection refused"}
{"level":"warn","ts":"2024-07-17T17:34:12.934292Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1d3f36ee75516151","rtt":"0s","error":"dial tcp 192.169.0.7:2380: i/o timeout"}
==> etcd [c6527d620dad] <==
{"level":"warn","ts":"2024-07-17T17:32:29.48769Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:24.462555Z","time spent":"5.025134128s","remote":"127.0.0.1:36734","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2024-07-17T17:32:29.48774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:32:25.149839Z","time spent":"4.337900582s","remote":"127.0.0.1:45174","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
2024/07/17 17:32:29 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2024-07-17T17:32:29.512674Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
{"level":"warn","ts":"2024-07-17T17:32:29.512703Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
{"level":"info","ts":"2024-07-17T17:32:29.512731Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
{"level":"warn","ts":"2024-07-17T17:32:29.512821Z","caller":"etcdserver/server.go:1165","msg":"failed to revoke lease","lease-id":"584490c1bc074071","error":"etcdserver: request cancelled"}
{"level":"info","ts":"2024-07-17T17:32:29.512836Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.512844Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.512857Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.512905Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.512927Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.512948Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.512956Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f6a6d94fe6ea5bc8"}
{"level":"info","ts":"2024-07-17T17:32:29.51296Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.512966Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.512977Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.513753Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.513778Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.516864Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.516891Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1d3f36ee75516151"}
{"level":"info","ts":"2024-07-17T17:32:29.518343Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
{"level":"info","ts":"2024-07-17T17:32:29.51839Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
{"level":"info","ts":"2024-07-17T17:32:29.518397Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-572000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
==> kernel <==
17:34:13 up 1 min, 0 users, load average: 0.10, 0.05, 0.01
Linux ha-572000 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kindnet [6e40e1427ab2] <==
I0717 17:31:56.892269 1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24]
I0717 17:32:06.898213 1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
I0717 17:32:06.898253 1 main.go:303] handling current node
I0717 17:32:06.898264 1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
I0717 17:32:06.898269 1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24]
I0717 17:32:06.898416 1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
I0717 17:32:06.898443 1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24]
I0717 17:32:06.898526 1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
I0717 17:32:06.898555 1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24]
I0717 17:32:16.896377 1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
I0717 17:32:16.896415 1 main.go:303] handling current node
I0717 17:32:16.896426 1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
I0717 17:32:16.896432 1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24]
I0717 17:32:16.896606 1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
I0717 17:32:16.896636 1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24]
I0717 17:32:16.896674 1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
I0717 17:32:16.896699 1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24]
I0717 17:32:26.896557 1 main.go:299] Handling node with IPs: map[192.169.0.5:{}]
I0717 17:32:26.896622 1 main.go:303] handling current node
I0717 17:32:26.896678 1 main.go:299] Handling node with IPs: map[192.169.0.6:{}]
I0717 17:32:26.896718 1 main.go:326] Node ha-572000-m02 has CIDR [10.244.1.0/24]
I0717 17:32:26.896938 1 main.go:299] Handling node with IPs: map[192.169.0.7:{}]
I0717 17:32:26.897017 1 main.go:326] Node ha-572000-m03 has CIDR [10.244.2.0/24]
I0717 17:32:26.897158 1 main.go:299] Handling node with IPs: map[192.169.0.8:{}]
I0717 17:32:26.897880 1 main.go:326] Node ha-572000-m04 has CIDR [10.244.3.0/24]
==> kube-apiserver [8f09c09ed996] <==
I0717 17:33:38.766324 1 options.go:221] external host was not specified, using 192.169.0.5
I0717 17:33:38.766955 1 server.go:148] Version: v1.30.2
I0717 17:33:38.767101 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0717 17:33:39.044188 1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
I0717 17:33:39.046954 1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0717 17:33:39.049409 1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0717 17:33:39.049435 1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0717 17:33:39.049563 1 instance.go:299] Using reconciler: lease
W0717 17:33:59.045294 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W0717 17:33:59.045986 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W0717 17:33:59.051243 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
F0717 17:33:59.051294 1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
==> kube-controller-manager [1e8f9939826f] <==
I0717 17:33:35.199426 1 serving.go:380] Generated self-signed cert in-memory
I0717 17:33:35.611724 1 controllermanager.go:189] "Starting" version="v1.30.2"
I0717 17:33:35.611860 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0717 17:33:35.612992 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0717 17:33:35.613172 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0717 17:33:35.613294 1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
I0717 17:33:35.613433 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0717 17:34:00.060238 1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:34220->192.169.0.5:8443: read: connection reset by peer"
==> kube-proxy [2aeed1983535] <==
I0717 17:27:43.315695 1 server_linux.go:69] "Using iptables proxy"
I0717 17:27:43.322673 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
I0717 17:27:43.354011 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0717 17:27:43.354032 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0717 17:27:43.354044 1 server_linux.go:165] "Using iptables Proxier"
I0717 17:27:43.355997 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0717 17:27:43.356216 1 server.go:872] "Version info" version="v1.30.2"
I0717 17:27:43.356225 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0717 17:27:43.356874 1 config.go:192] "Starting service config controller"
I0717 17:27:43.356903 1 shared_informer.go:313] Waiting for caches to sync for service config
I0717 17:27:43.356921 1 config.go:319] "Starting node config controller"
I0717 17:27:43.356943 1 shared_informer.go:313] Waiting for caches to sync for node config
I0717 17:27:43.357077 1 config.go:101] "Starting endpoint slice config controller"
I0717 17:27:43.357144 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0717 17:27:43.457513 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0717 17:27:43.457607 1 shared_informer.go:320] Caches are synced for node config
I0717 17:27:43.457639 1 shared_informer.go:320] Caches are synced for service config
==> kube-scheduler [a53f8fcdf5d9] <==
Trace[1679676222]: ---"Objects listed" error:Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:33:55.829)
Trace[1679676222]: [10.002148793s] [10.002148793s] END
E0717 17:33:55.829461 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout
W0717 17:34:00.059485 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59664->192.169.0.5:8443: read: connection reset by peer
E0717 17:34:00.060786 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.169.0.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused - error from a previous attempt: read tcp 192.169.0.5:59664->192.169.0.5:8443: read: connection reset by peer
W0717 17:34:07.801237 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:07.801736 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:08.253794 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:08.253923 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.169.0.5:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:08.255527 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:08.255685 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:09.119507 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:09.120276 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.169.0.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:09.844542 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:09.845089 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.169.0.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:11.002782 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:11.003425 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.169.0.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:11.004959 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:11.005426 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:11.724425 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:11.724599 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.169.0.5:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:13.328905 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:13.329009 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.169.0.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
W0717 17:34:13.526537 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
E0717 17:34:13.526638 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.169.0.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.169.0.5:8443: connect: connection refused
==> kube-scheduler [e29f4fe295c1] <==
W0717 17:27:25.906822 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0717 17:27:25.906857 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0717 17:27:25.906870 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 17:27:25.906912 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0717 17:27:26.715070 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0717 17:27:26.715127 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0717 17:27:26.797242 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0717 17:27:26.797298 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0717 17:27:26.957071 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0717 17:27:26.957111 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0717 17:27:27.013148 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0717 17:27:27.013190 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0717 17:27:29.895450 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0717 17:30:13.328557 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
E0717 17:30:13.329015 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2f9e6064-727c-486c-b925-3ce5866e42ff(default/busybox-fc5497c4f-jhz2d) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-jhz2d"
E0717 17:30:13.329121 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-jhz2d\": pod busybox-fc5497c4f-jhz2d is already assigned to node \"ha-572000-m03\"" pod="default/busybox-fc5497c4f-jhz2d"
I0717 17:30:13.329256 1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-jhz2d" node="ha-572000-m03"
E0717 17:30:13.362412 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zwhws" node="ha-572000"
E0717 17:30:13.362474 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zwhws\": pod busybox-fc5497c4f-zwhws is already assigned to node \"ha-572000\"" pod="default/busybox-fc5497c4f-zwhws"
E0717 17:30:13.441720 1 schedule_one.go:1067] "Error occurred" err="Pod default/busybox-fc5497c4f-l7sqr is already present in the active queue" pod="default/busybox-fc5497c4f-l7sqr"
E0717 17:30:39.870609 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
E0717 17:30:39.870661 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 731f5b57-131e-4e97-b47a-036b8d4edbcd(kube-system/kube-proxy-5wcph) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5wcph"
E0717 17:30:39.870672 1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5wcph\": pod kube-proxy-5wcph is already assigned to node \"ha-572000-m04\"" pod="kube-system/kube-proxy-5wcph"
I0717 17:30:39.870686 1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5wcph" node="ha-572000-m04"
E0717 17:32:29.355082 1 run.go:74] "command failed" err="finished without leader elect"
==> kubelet <==
Jul 17 17:33:55 ha-572000 kubelet[1582]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jul 17 17:33:55 ha-572000 kubelet[1582]: E0717 17:33:55.659345 1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
Jul 17 17:33:57 ha-572000 kubelet[1582]: E0717 17:33:57.056097 1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-572000.17e310749989a167 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-572000,UID:ha-572000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-572000,},FirstTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,LastTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-572000,}"
Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.128178 1582 scope.go:117] "RemoveContainer" containerID="3f41d1a5d2c920e02b958fc57cddcf6e76eb7e4b56f3544ba5066b682eea0c8d"
Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.129823 1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.130076 1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.145332 1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
Jul 17 17:34:00 ha-572000 kubelet[1582]: E0717 17:34:00.145681 1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.150461 1582 scope.go:117] "RemoveContainer" containerID="a6e0742139d63216679ae18bc09ccda23c0ed0e4d7a419b36e966729969bea9b"
Jul 17 17:34:00 ha-572000 kubelet[1582]: I0717 17:34:00.925798 1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.198768 1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.199285 1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
Jul 17 17:34:03 ha-572000 kubelet[1582]: I0717 17:34:03.360896 1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
Jul 17 17:34:03 ha-572000 kubelet[1582]: E0717 17:34:03.361398 1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
Jul 17 17:34:04 ha-572000 kubelet[1582]: I0717 17:34:04.792263 1582 scope.go:117] "RemoveContainer" containerID="1e8f9939826f42cab723a7f369c1a902565ea9d36b6d0686f9d004580deab7ca"
Jul 17 17:34:04 ha-572000 kubelet[1582]: E0717 17:34:04.792672 1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-572000_kube-system(967ef612fac8855d0ef3892ca2ae35cc)\"" pod="kube-system/kube-controller-manager-ha-572000" podUID="967ef612fac8855d0ef3892ca2ae35cc"
Jul 17 17:34:05 ha-572000 kubelet[1582]: I0717 17:34:05.369319 1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.369956 1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
Jul 17 17:34:05 ha-572000 kubelet[1582]: E0717 17:34:05.660481 1582 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-572000\" not found"
Jul 17 17:34:08 ha-572000 kubelet[1582]: I0717 17:34:08.082261 1582 scope.go:117] "RemoveContainer" containerID="8f09c09ed996a44a07a30bb239c72d137dd499aacb063aaef3085dd64ad2139f"
Jul 17 17:34:08 ha-572000 kubelet[1582]: E0717 17:34:08.082649 1582 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-572000_kube-system(2508736da9632d6a9c9aaf9c250b4f65)\"" pod="kube-system/kube-apiserver-ha-572000" podUID="2508736da9632d6a9c9aaf9c250b4f65"
Jul 17 17:34:09 ha-572000 kubelet[1582]: E0717 17:34:09.342982 1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-572000.17e310749989a167 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-572000,UID:ha-572000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-572000,},FirstTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,LastTimestamp:2024-07-17 17:32:55.563845991 +0000 UTC m=+0.198827444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-572000,}"
Jul 17 17:34:10 ha-572000 kubelet[1582]: I0717 17:34:10.207057 1582 kubelet_node_status.go:73] "Attempting to register node" node="ha-572000"
Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.418909 1582 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.169.0.254:8443: connect: no route to host" node="ha-572000"
Jul 17 17:34:12 ha-572000 kubelet[1582]: E0717 17:34:12.419039 1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-572000?timeout=10s\": dial tcp 192.169.0.254:8443: connect: no route to host" interval="7s"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-572000 -n ha-572000: exit status 2 (156.383656ms)
-- stdout --
Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-572000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.19s)