=== RUN TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run: out/minikube-darwin-amd64 node list -p ha-671000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run: out/minikube-darwin-amd64 stop -p ha-671000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-671000 -v=7 --alsologtostderr: (27.186695254s)
ha_test.go:467: (dbg) Run: out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr
E0505 14:21:07.678588 54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:22:31.483471 54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:23:23.841471 54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr: exit status 90 (2m56.98572553s)
-- stdout --
* [ha-671000] minikube v1.33.0 on Darwin 14.4.1
- MINIKUBE_LOCATION=18602
- KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on existing profile
* Starting "ha-671000" primary control-plane node in "ha-671000" cluster
* Restarting existing hyperkit VM for "ha-671000" ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
* Enabled addons:
* Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
* Restarting existing hyperkit VM for "ha-671000-m02" ...
* Found network options:
- NO_PROXY=192.169.0.51
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- env NO_PROXY=192.169.0.51
* Verifying Kubernetes components...
* Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
* Restarting existing hyperkit VM for "ha-671000-m03" ...
* Found network options:
- NO_PROXY=192.169.0.51,192.169.0.52
-- /stdout --
** stderr **
I0505 14:20:48.965096 56262 out.go:291] Setting OutFile to fd 1 ...
I0505 14:20:48.965304 56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:20:48.965309 56262 out.go:304] Setting ErrFile to fd 2...
I0505 14:20:48.965313 56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:20:48.965501 56262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:20:48.966984 56262 out.go:298] Setting JSON to false
I0505 14:20:48.991851 56262 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19219,"bootTime":1714924829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0505 14:20:48.991949 56262 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0505 14:20:49.013239 56262 out.go:177] * [ha-671000] minikube v1.33.0 on Darwin 14.4.1
I0505 14:20:49.055173 56262 out.go:177] - MINIKUBE_LOCATION=18602
I0505 14:20:49.055223 56262 notify.go:220] Checking for updates...
I0505 14:20:49.077109 56262 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:20:49.097964 56262 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0505 14:20:49.119233 56262 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0505 14:20:49.139935 56262 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
I0505 14:20:49.161146 56262 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0505 14:20:49.182881 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:20:49.183046 56262 driver.go:392] Setting default libvirt URI to qemu:///system
I0505 14:20:49.183689 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:20:49.183764 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:20:49.193369 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57871
I0505 14:20:49.193700 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:20:49.194120 56262 main.go:141] libmachine: Using API Version 1
I0505 14:20:49.194134 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:20:49.194326 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:20:49.194462 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:20:49.223183 56262 out.go:177] * Using the hyperkit driver based on existing profile
I0505 14:20:49.265211 56262 start.go:297] selected driver: hyperkit
I0505 14:20:49.265249 56262 start.go:901] validating driver "hyperkit" against &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0505 14:20:49.265473 56262 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0505 14:20:49.265691 56262 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0505 14:20:49.265889 56262 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0505 14:20:49.275605 56262 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
I0505 14:20:49.280711 56262 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:20:49.280731 56262 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0505 14:20:49.284127 56262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0505 14:20:49.284202 56262 cni.go:84] Creating CNI manager for ""
I0505 14:20:49.284211 56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0505 14:20:49.284292 56262 start.go:340] cluster config:
{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0505 14:20:49.284394 56262 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0505 14:20:49.326088 56262 out.go:177] * Starting "ha-671000" primary control-plane node in "ha-671000" cluster
I0505 14:20:49.347002 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:20:49.347074 56262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0505 14:20:49.347098 56262 cache.go:56] Caching tarball of preloaded images
I0505 14:20:49.347288 56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0505 14:20:49.347306 56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:20:49.347472 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:20:49.348516 56262 start.go:360] acquireMachinesLock for ha-671000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:20:49.348656 56262 start.go:364] duration metric: took 99.405µs to acquireMachinesLock for "ha-671000"
I0505 14:20:49.348707 56262 start.go:96] Skipping create...Using existing machine configuration
I0505 14:20:49.348726 56262 fix.go:54] fixHost starting:
I0505 14:20:49.349125 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:20:49.349160 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:20:49.358523 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57873
I0505 14:20:49.358884 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:20:49.359279 56262 main.go:141] libmachine: Using API Version 1
I0505 14:20:49.359298 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:20:49.359523 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:20:49.359669 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:20:49.359788 56262 main.go:141] libmachine: (ha-671000) Calling .GetState
I0505 14:20:49.359894 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:20:49.359963 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 55694
I0505 14:20:49.360866 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
I0505 14:20:49.360926 56262 fix.go:112] recreateIfNeeded on ha-671000: state=Stopped err=<nil>
I0505 14:20:49.360950 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
W0505 14:20:49.361041 56262 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:20:49.402877 56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000" ...
I0505 14:20:49.423939 56262 main.go:141] libmachine: (ha-671000) Calling .Start
I0505 14:20:49.424311 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:20:49.424354 56262 main.go:141] libmachine: (ha-671000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid
I0505 14:20:49.426302 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
I0505 14:20:49.426313 56262 main.go:141] libmachine: (ha-671000) DBG | pid 55694 is in state "Stopped"
I0505 14:20:49.426344 56262 main.go:141] libmachine: (ha-671000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid...
I0505 14:20:49.426771 56262 main.go:141] libmachine: (ha-671000) DBG | Using UUID 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96
I0505 14:20:49.551381 56262 main.go:141] libmachine: (ha-671000) DBG | Generated MAC 72:52:a3:7d:5c:d1
I0505 14:20:49.551411 56262 main.go:141] libmachine: (ha-671000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
I0505 14:20:49.551646 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:20:49.551692 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:20:49.551780 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
I0505 14:20:49.551846 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
I0505 14:20:49.551864 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0505 14:20:49.553184 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Pid is 56275
I0505 14:20:49.553639 56262 main.go:141] libmachine: (ha-671000) DBG | Attempt 0
I0505 14:20:49.553663 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:20:49.553735 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
I0505 14:20:49.555494 56262 main.go:141] libmachine: (ha-671000) DBG | Searching for 72:52:a3:7d:5c:d1 in /var/db/dhcpd_leases ...
I0505 14:20:49.555595 56262 main.go:141] libmachine: (ha-671000) DBG | Found 53 entries in /var/db/dhcpd_leases!
I0505 14:20:49.555611 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
I0505 14:20:49.555629 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
I0505 14:20:49.555648 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
I0505 14:20:49.555661 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394853}
I0505 14:20:49.555667 56262 main.go:141] libmachine: (ha-671000) DBG | Found match: 72:52:a3:7d:5c:d1
I0505 14:20:49.555674 56262 main.go:141] libmachine: (ha-671000) DBG | IP: 192.169.0.51
I0505 14:20:49.555696 56262 main.go:141] libmachine: (ha-671000) Calling .GetConfigRaw
I0505 14:20:49.556342 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:20:49.556516 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:20:49.556975 56262 machine.go:94] provisionDockerMachine start ...
I0505 14:20:49.556985 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:20:49.557119 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:20:49.557222 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:20:49.557336 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:20:49.557465 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:20:49.557602 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:20:49.557742 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:20:49.557972 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:20:49.557981 56262 main.go:141] libmachine: About to run SSH command:
hostname
I0505 14:20:49.561305 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0505 14:20:49.617858 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0505 14:20:49.618520 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:20:49.618541 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:20:49.618548 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:20:49.618556 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:20:50.003923 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0505 14:20:50.003954 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0505 14:20:50.118574 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:20:50.118591 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:20:50.118604 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:20:50.118620 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:20:50.119491 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0505 14:20:50.119502 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0505 14:20:55.386088 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0505 14:20:55.386105 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0505 14:20:55.386124 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0505 14:20:55.410129 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0505 14:20:59.165992 56262 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.51:22: connect: connection refused
I0505 14:21:02.226047 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0505 14:21:02.226063 56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
I0505 14:21:02.226198 56262 buildroot.go:166] provisioning hostname "ha-671000"
I0505 14:21:02.226208 56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
I0505 14:21:02.226303 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.226392 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.226492 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.226582 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.226673 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.226801 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.226937 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.226945 56262 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-671000 && echo "ha-671000" | sudo tee /etc/hostname
I0505 14:21:02.297369 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000
I0505 14:21:02.297395 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.297543 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.297643 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.297751 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.297848 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.297983 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.298121 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.298132 56262 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-671000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000/g' /etc/hosts;
else
echo '127.0.1.1 ha-671000' | sudo tee -a /etc/hosts;
fi
fi
I0505 14:21:02.363709 56262 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0505 14:21:02.363736 56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
I0505 14:21:02.363757 56262 buildroot.go:174] setting up certificates
I0505 14:21:02.363764 56262 provision.go:84] configureAuth start
I0505 14:21:02.363771 56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
I0505 14:21:02.363911 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:21:02.364012 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.364108 56262 provision.go:143] copyHostCerts
I0505 14:21:02.364139 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:02.364208 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
I0505 14:21:02.364216 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:02.364363 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
I0505 14:21:02.364576 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:02.364616 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
I0505 14:21:02.364621 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:02.364702 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
I0505 14:21:02.364858 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:02.364899 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
I0505 14:21:02.364904 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:02.364979 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
I0505 14:21:02.365133 56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000 san=[127.0.0.1 192.169.0.51 ha-671000 localhost minikube]
I0505 14:21:02.566783 56262 provision.go:177] copyRemoteCerts
I0505 14:21:02.566851 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0505 14:21:02.566867 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.567002 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.567081 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.567166 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.567249 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:02.603993 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0505 14:21:02.604064 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0505 14:21:02.623864 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
I0505 14:21:02.623931 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0505 14:21:02.642984 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0505 14:21:02.643054 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0505 14:21:02.662651 56262 provision.go:87] duration metric: took 298.874135ms to configureAuth
I0505 14:21:02.662663 56262 buildroot.go:189] setting minikube options for container-runtime
I0505 14:21:02.662832 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:02.662845 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:02.662976 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.663065 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.663164 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.663269 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.663357 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.663467 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.663594 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.663602 56262 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0505 14:21:02.721847 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0505 14:21:02.721864 56262 buildroot.go:70] root file system type: tmpfs
I0505 14:21:02.721944 56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0505 14:21:02.721957 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.722094 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.722182 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.722290 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.722379 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.722504 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.722641 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.722685 56262 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0505 14:21:02.791477 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0505 14:21:02.791499 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.791628 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.791713 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.791806 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.791895 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.792000 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.792138 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.792148 56262 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0505 14:21:04.463791 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0505 14:21:04.463805 56262 machine.go:97] duration metric: took 14.90688888s to provisionDockerMachine
I0505 14:21:04.463814 56262 start.go:293] postStartSetup for "ha-671000" (driver="hyperkit")
I0505 14:21:04.463821 56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0505 14:21:04.463832 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.464011 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0505 14:21:04.464034 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.464144 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.464235 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.464343 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.464431 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.510297 56262 ssh_runner.go:195] Run: cat /etc/os-release
I0505 14:21:04.514333 56262 info.go:137] Remote host: Buildroot 2023.02.9
I0505 14:21:04.514346 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
I0505 14:21:04.514446 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
I0505 14:21:04.514637 56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
I0505 14:21:04.514644 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
I0505 14:21:04.514851 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0505 14:21:04.528097 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:04.557607 56262 start.go:296] duration metric: took 93.785206ms for postStartSetup
I0505 14:21:04.557630 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.557802 56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0505 14:21:04.557815 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.557914 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.558026 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.558104 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.558180 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.595384 56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0505 14:21:04.595439 56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0505 14:21:04.627954 56262 fix.go:56] duration metric: took 15.279298664s for fixHost
I0505 14:21:04.627978 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.628106 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.628210 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.628316 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.628400 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.628519 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:04.628664 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:04.628671 56262 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0505 14:21:04.687788 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944064.851392424
I0505 14:21:04.687801 56262 fix.go:216] guest clock: 1714944064.851392424
I0505 14:21:04.687806 56262 fix.go:229] Guest: 2024-05-05 14:21:04.851392424 -0700 PDT Remote: 2024-05-05 14:21:04.627967 -0700 PDT m=+15.708271847 (delta=223.425424ms)
I0505 14:21:04.687822 56262 fix.go:200] guest clock delta is within tolerance: 223.425424ms
I0505 14:21:04.687828 56262 start.go:83] releasing machines lock for "ha-671000", held for 15.339229169s
I0505 14:21:04.687844 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.687975 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:21:04.688073 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.688362 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.688461 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.688537 56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0505 14:21:04.688563 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.688585 56262 ssh_runner.go:195] Run: cat /version.json
I0505 14:21:04.688594 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.688666 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.688681 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.688776 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.688794 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.688857 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.688870 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.688932 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.688951 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.773179 56262 ssh_runner.go:195] Run: systemctl --version
I0505 14:21:04.778074 56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0505 14:21:04.782225 56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0505 14:21:04.782267 56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0505 14:21:04.795505 56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0505 14:21:04.795515 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:04.795626 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:04.813193 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0505 14:21:04.822043 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0505 14:21:04.830859 56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0505 14:21:04.830912 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0505 14:21:04.839650 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:04.848348 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0505 14:21:04.857332 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:04.866100 56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0505 14:21:04.874955 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0505 14:21:04.883995 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0505 14:21:04.892686 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0505 14:21:04.901641 56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0505 14:21:04.909531 56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0505 14:21:04.917434 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:05.025345 56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0505 14:21:05.045401 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:05.045483 56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0505 14:21:05.056970 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:05.067558 56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0505 14:21:05.082472 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:05.093595 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:05.104660 56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0505 14:21:05.123434 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:05.136644 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:05.151834 56262 ssh_runner.go:195] Run: which cri-dockerd
I0505 14:21:05.154642 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0505 14:21:05.162375 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0505 14:21:05.175761 56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0505 14:21:05.270844 56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0505 14:21:05.375810 56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0505 14:21:05.375883 56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0505 14:21:05.390245 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:05.495960 56262 ssh_runner.go:195] Run: sudo systemctl restart docker
I0505 14:21:07.797662 56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301692609s)
I0505 14:21:07.797733 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0505 14:21:07.809357 56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0505 14:21:07.822066 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:07.832350 56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0505 14:21:07.930252 56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0505 14:21:08.029360 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:08.124190 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0505 14:21:08.137986 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:08.149027 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:08.258895 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0505 14:21:08.326102 56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0505 14:21:08.326177 56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0505 14:21:08.330736 56262 start.go:562] Will wait 60s for crictl version
I0505 14:21:08.330787 56262 ssh_runner.go:195] Run: which crictl
I0505 14:21:08.333926 56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0505 14:21:08.360867 56262 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.0.2
RuntimeApiVersion: v1
I0505 14:21:08.360957 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:08.380536 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:08.444390 56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
I0505 14:21:08.444441 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:21:08.444833 56262 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0505 14:21:08.449245 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:08.459088 56262 kubeadm.go:877] updating cluster {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0505 14:21:08.459178 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:21:08.459237 56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0505 14:21:08.472336 56262 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
ghcr.io/kube-vip/kube-vip:v0.7.1
registry.k8s.io/etcd:3.5.12-0
kindest/kindnetd:v20240202-8f1494ea
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0505 14:21:08.472348 56262 docker.go:615] Images already preloaded, skipping extraction
I0505 14:21:08.472419 56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0505 14:21:08.484264 56262 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
ghcr.io/kube-vip/kube-vip:v0.7.1
registry.k8s.io/etcd:3.5.12-0
kindest/kindnetd:v20240202-8f1494ea
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0505 14:21:08.484284 56262 cache_images.go:84] Images are preloaded, skipping loading
I0505 14:21:08.484299 56262 kubeadm.go:928] updating node { 192.169.0.51 8443 v1.30.0 docker true true} ...
I0505 14:21:08.484375 56262 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.51
[Install]
config:
{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0505 14:21:08.484439 56262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0505 14:21:08.500967 56262 cni.go:84] Creating CNI manager for ""
I0505 14:21:08.500979 56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0505 14:21:08.500990 56262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0505 14:21:08.501005 56262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671000 NodeName:ha-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0505 14:21:08.501088 56262 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.169.0.51
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-671000"
kubeletExtraArgs:
node-ip: 192.169.0.51
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.169.0.51"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0505 14:21:08.501113 56262 kube-vip.go:111] generating kube-vip config ...
I0505 14:21:08.501162 56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0505 14:21:08.513119 56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
I0505 14:21:08.513193 56262 kube-vip.go:133] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.169.0.254
- name: prometheus_server
value: :2112
- name : lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.7.1
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0505 14:21:08.513250 56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0505 14:21:08.521487 56262 binaries.go:44] Found k8s binaries, skipping transfer
I0505 14:21:08.521531 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0505 14:21:08.528952 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
I0505 14:21:08.542487 56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0505 14:21:08.556157 56262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
I0505 14:21:08.570110 56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
I0505 14:21:08.584111 56262 ssh_runner.go:195] Run: grep 192.169.0.254 control-plane.minikube.internal$ /etc/hosts
I0505 14:21:08.586992 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:08.596597 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:08.710024 56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0505 14:21:08.724251 56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.51
I0505 14:21:08.724262 56262 certs.go:194] generating shared ca certs ...
I0505 14:21:08.724272 56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:08.724457 56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
I0505 14:21:08.724528 56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
I0505 14:21:08.724539 56262 certs.go:256] generating profile certs ...
I0505 14:21:08.724648 56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
I0505 14:21:08.724671 56262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190
I0505 14:21:08.724686 56262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.53 192.169.0.254]
I0505 14:21:08.826095 56262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 ...
I0505 14:21:08.826111 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190: {Name:mk26b58616f2e9bcce56069037dda85d1d8c350c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:08.826754 56262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 ...
I0505 14:21:08.826765 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190: {Name:mk7fc32008d240a4b7e6cb64bdeb1f596430582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:08.826983 56262 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
I0505 14:21:08.827192 56262 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
I0505 14:21:08.827434 56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
I0505 14:21:08.827443 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0505 14:21:08.827466 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0505 14:21:08.827487 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0505 14:21:08.827506 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0505 14:21:08.827523 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0505 14:21:08.827541 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0505 14:21:08.827559 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0505 14:21:08.827576 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0505 14:21:08.827667 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
W0505 14:21:08.827718 56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
I0505 14:21:08.827726 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
I0505 14:21:08.827758 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
I0505 14:21:08.827791 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
I0505 14:21:08.827822 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
I0505 14:21:08.827892 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:08.827924 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:08.827970 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
I0505 14:21:08.827988 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
I0505 14:21:08.828425 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0505 14:21:08.851250 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0505 14:21:08.872963 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0505 14:21:08.895079 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0505 14:21:08.922893 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0505 14:21:08.953937 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0505 14:21:08.983911 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0505 14:21:09.023252 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0505 14:21:09.070795 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0505 14:21:09.113576 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
I0505 14:21:09.150037 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
I0505 14:21:09.170089 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0505 14:21:09.184262 56262 ssh_runner.go:195] Run: openssl version
I0505 14:21:09.188637 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
I0505 14:21:09.197186 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
I0505 14:21:09.200763 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 5 21:08 /usr/share/ca-certificates/542102.pem
I0505 14:21:09.200802 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
I0505 14:21:09.205113 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
I0505 14:21:09.213846 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0505 14:21:09.222459 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:09.225992 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 5 20:59 /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:09.226036 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:09.230212 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0505 14:21:09.238744 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
I0505 14:21:09.247131 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
I0505 14:21:09.250641 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 5 21:08 /usr/share/ca-certificates/54210.pem
I0505 14:21:09.250684 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
I0505 14:21:09.254933 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
I0505 14:21:09.263283 56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0505 14:21:09.266913 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0505 14:21:09.271690 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0505 14:21:09.276202 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0505 14:21:09.280723 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0505 14:21:09.285120 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0505 14:21:09.289468 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0505 14:21:09.293767 56262 kubeadm.go:391] StartCluster: {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0505 14:21:09.293893 56262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0505 14:21:09.305167 56262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0505 14:21:09.312937 56262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0505 14:21:09.312947 56262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0505 14:21:09.312965 56262 kubeadm.go:587] restartPrimaryControlPlane start ...
I0505 14:21:09.313010 56262 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0505 14:21:09.320777 56262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0505 14:21:09.321098 56262 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671000" does not appear in /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:09.321183 56262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-53665/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671000" cluster setting kubeconfig missing "ha-671000" context setting]
I0505 14:21:09.321347 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:09.321996 56262 loader.go:395] Config loaded from file: /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:09.322179 56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0505 14:21:09.322483 56262 cert_rotation.go:137] Starting client certificate rotation controller
I0505 14:21:09.322660 56262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0505 14:21:09.330103 56262 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.51
I0505 14:21:09.330115 56262 kubeadm.go:591] duration metric: took 17.1285ms to restartPrimaryControlPlane
I0505 14:21:09.330120 56262 kubeadm.go:393] duration metric: took 36.320628ms to StartCluster
I0505 14:21:09.330127 56262 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:09.330217 56262 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:09.330637 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:09.330863 56262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0505 14:21:09.330875 56262 start.go:240] waiting for startup goroutines ...
I0505 14:21:09.330887 56262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0505 14:21:09.373046 56262 out.go:177] * Enabled addons:
I0505 14:21:09.331023 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:09.395270 56262 addons.go:510] duration metric: took 64.318856ms for enable addons: enabled=[]
I0505 14:21:09.395388 56262 start.go:245] waiting for cluster config update ...
I0505 14:21:09.395406 56262 start.go:254] writing updated cluster config ...
I0505 14:21:09.418289 56262 out.go:177]
I0505 14:21:09.439589 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:09.439723 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:21:09.462158 56262 out.go:177] * Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
I0505 14:21:09.504016 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:21:09.504076 56262 cache.go:56] Caching tarball of preloaded images
I0505 14:21:09.504246 56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0505 14:21:09.504264 56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:21:09.504398 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:21:09.505447 56262 start.go:360] acquireMachinesLock for ha-671000-m02: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:21:09.505557 56262 start.go:364] duration metric: took 85.865µs to acquireMachinesLock for "ha-671000-m02"
I0505 14:21:09.505582 56262 start.go:96] Skipping create...Using existing machine configuration
I0505 14:21:09.505589 56262 fix.go:54] fixHost starting: m02
I0505 14:21:09.506042 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:21:09.506080 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:21:09.515413 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57896
I0505 14:21:09.515746 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:21:09.516119 56262 main.go:141] libmachine: Using API Version 1
I0505 14:21:09.516136 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:21:09.516414 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:21:09.516555 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:09.516655 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
I0505 14:21:09.516736 56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:09.516805 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56210
I0505 14:21:09.517744 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
I0505 14:21:09.517764 56262 fix.go:112] recreateIfNeeded on ha-671000-m02: state=Stopped err=<nil>
I0505 14:21:09.517774 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
W0505 14:21:09.517855 56262 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:21:09.539362 56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m02" ...
I0505 14:21:09.581177 56262 main.go:141] libmachine: (ha-671000-m02) Calling .Start
I0505 14:21:09.581513 56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:09.581582 56262 main.go:141] libmachine: (ha-671000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid
I0505 14:21:09.583319 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
I0505 14:21:09.583336 56262 main.go:141] libmachine: (ha-671000-m02) DBG | pid 56210 is in state "Stopped"
I0505 14:21:09.583361 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid...
I0505 14:21:09.583762 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Using UUID 294bfc97-3e6f-4d68-b3f3-54381951a5e8
I0505 14:21:09.611765 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Generated MAC 92:83:2c:36:f7:7d
I0505 14:21:09.611789 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
I0505 14:21:09.611924 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:21:09.611964 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:21:09.612015 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "294bfc97-3e6f-4d68-b3f3-54381951a5e8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
I0505 14:21:09.612064 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 294bfc97-3e6f-4d68-b3f3-54381951a5e8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
I0505 14:21:09.612079 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0505 14:21:09.613498 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Pid is 56285
I0505 14:21:09.613935 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Attempt 0
I0505 14:21:09.613949 56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:09.614012 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
I0505 14:21:09.615713 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Searching for 92:83:2c:36:f7:7d in /var/db/dhcpd_leases ...
I0505 14:21:09.615841 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
I0505 14:21:09.615860 56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
I0505 14:21:09.615883 56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
I0505 14:21:09.615897 56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
I0505 14:21:09.615905 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found match: 92:83:2c:36:f7:7d
I0505 14:21:09.615916 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetConfigRaw
I0505 14:21:09.615920 56262 main.go:141] libmachine: (ha-671000-m02) DBG | IP: 192.169.0.52
I0505 14:21:09.616579 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:09.616779 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:21:09.617318 56262 machine.go:94] provisionDockerMachine start ...
I0505 14:21:09.617329 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:09.617443 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:09.617536 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:09.617633 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:09.617737 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:09.617836 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:09.617968 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:09.618123 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:09.618132 56262 main.go:141] libmachine: About to run SSH command:
hostname
I0505 14:21:09.621348 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0505 14:21:09.630281 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0505 14:21:09.631193 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:21:09.631218 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:21:09.631230 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:21:09.631252 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:21:10.019586 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0505 14:21:10.019603 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0505 14:21:10.134248 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:21:10.134266 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:21:10.134281 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:21:10.134292 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:21:10.135185 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0505 14:21:10.135199 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0505 14:21:15.419942 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0505 14:21:15.419970 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0505 14:21:15.419978 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0505 14:21:15.445269 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0505 14:21:20.698093 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0505 14:21:20.698110 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
I0505 14:21:20.698266 56262 buildroot.go:166] provisioning hostname "ha-671000-m02"
I0505 14:21:20.698277 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
I0505 14:21:20.698366 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.698443 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:20.698518 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.698602 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.698696 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:20.698824 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:20.698977 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:20.698987 56262 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-671000-m02 && echo "ha-671000-m02" | sudo tee /etc/hostname
I0505 14:21:20.773304 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m02
I0505 14:21:20.773319 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.773451 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:20.773547 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.773625 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.773710 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:20.773837 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:20.773989 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:20.774000 56262 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-671000-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-671000-m02' | sudo tee -a /etc/hosts;
fi
fi
I0505 14:21:20.846506 56262 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0505 14:21:20.846523 56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
I0505 14:21:20.846532 56262 buildroot.go:174] setting up certificates
I0505 14:21:20.846537 56262 provision.go:84] configureAuth start
I0505 14:21:20.846545 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
I0505 14:21:20.846678 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:20.846753 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.846822 56262 provision.go:143] copyHostCerts
I0505 14:21:20.846847 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:20.846900 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
I0505 14:21:20.846906 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:20.847106 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
I0505 14:21:20.847298 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:20.847327 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
I0505 14:21:20.847332 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:20.847414 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
I0505 14:21:20.847555 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:20.847584 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
I0505 14:21:20.847588 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:20.847657 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
I0505 14:21:20.847808 56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m02 san=[127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube]
I0505 14:21:20.923054 56262 provision.go:177] copyRemoteCerts
I0505 14:21:20.923102 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0505 14:21:20.923114 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.923242 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:20.923344 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.923432 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:20.923508 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:20.963007 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0505 14:21:20.963079 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0505 14:21:20.982214 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
I0505 14:21:20.982293 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0505 14:21:21.001587 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0505 14:21:21.001658 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0505 14:21:21.020765 56262 provision.go:87] duration metric: took 174.141582ms to configureAuth
I0505 14:21:21.020780 56262 buildroot.go:189] setting minikube options for container-runtime
I0505 14:21:21.020945 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:21.020958 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:21.021085 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:21.021186 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:21.021280 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.021382 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.021493 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:21.021630 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:21.021764 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:21.021777 56262 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0505 14:21:21.088593 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0505 14:21:21.088605 56262 buildroot.go:70] root file system type: tmpfs
I0505 14:21:21.088686 56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0505 14:21:21.088698 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:21.088827 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:21.088944 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.089047 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.089155 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:21.089299 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:21.089434 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:21.089481 56262 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.169.0.51"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0505 14:21:21.165319 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.169.0.51
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0505 14:21:21.165336 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:21.165469 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:21.165561 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.165660 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.165755 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:21.165892 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:21.166034 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:21.166046 56262 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0505 14:21:22.810399 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0505 14:21:22.810414 56262 machine.go:97] duration metric: took 13.184745912s to provisionDockerMachine
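
The unit update above uses minikube's install-only-if-changed pattern: the docker.service body is written to a .new path, and only swapped into place (with a forced daemon-reload, enable and restart) when diff reports a difference. Condensed from the command logged at 14:21:21.166:

    # the unit body is the one tee'd to docker.service.new in the command above;
    # it is only installed when it differs from the current unit
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
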
I0505 14:21:22.810422 56262 start.go:293] postStartSetup for "ha-671000-m02" (driver="hyperkit")
I0505 14:21:22.810435 56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0505 14:21:22.810448 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:22.810630 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0505 14:21:22.810642 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:22.810731 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:22.810813 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.810958 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:22.811059 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:22.854108 56262 ssh_runner.go:195] Run: cat /etc/os-release
I0505 14:21:22.857587 56262 info.go:137] Remote host: Buildroot 2023.02.9
I0505 14:21:22.857599 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
I0505 14:21:22.857687 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
I0505 14:21:22.857827 56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
I0505 14:21:22.857833 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
I0505 14:21:22.857984 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0505 14:21:22.870076 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:22.896680 56262 start.go:296] duration metric: took 86.209325ms for postStartSetup
I0505 14:21:22.896713 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:22.896900 56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0505 14:21:22.896916 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:22.897010 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:22.897116 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.897207 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:22.897282 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:22.937842 56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0505 14:21:22.937898 56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0505 14:21:22.971365 56262 fix.go:56] duration metric: took 13.45726146s for fixHost
I0505 14:21:22.971396 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:22.971537 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:22.971639 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.971717 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.971804 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:22.971961 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:22.972106 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:22.972117 56262 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0505 14:21:23.038093 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944083.052286945
I0505 14:21:23.038109 56262 fix.go:216] guest clock: 1714944083.052286945
I0505 14:21:23.038115 56262 fix.go:229] Guest: 2024-05-05 14:21:23.052286945 -0700 PDT Remote: 2024-05-05 14:21:22.971379 -0700 PDT m=+34.042274957 (delta=80.907945ms)
I0505 14:21:23.038125 56262 fix.go:200] guest clock delta is within tolerance: 80.907945ms
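
The guest/host clock comparison above is just a subtraction of the two epoch timestamps; a quick check with the values reported in the log (the host value is the "Remote" timestamp converted to a Unix epoch):

    guest=1714944083.052286945   # "date +%s.%N" as reported by the VM
    host=1714944082.971379       # 2024-05-05 14:21:22.971379 -0700 as a Unix epoch
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta = %.3f ms\n", (g - h) * 1000 }'
    # delta = 80.908 ms, matching the ~80.9 ms tolerance check above
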
I0505 14:21:23.038129 56262 start.go:83] releasing machines lock for "ha-671000-m02", held for 13.524025366s
I0505 14:21:23.038145 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.038286 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:23.061518 56262 out.go:177] * Found network options:
I0505 14:21:23.083843 56262 out.go:177] - NO_PROXY=192.169.0.51
W0505 14:21:23.105432 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:21:23.105470 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.106334 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.106599 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.106711 56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0505 14:21:23.106753 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
W0505 14:21:23.106918 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:21:23.107013 56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0505 14:21:23.107023 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:23.107033 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:23.107244 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:23.107275 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:23.107414 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:23.107468 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:23.107556 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:23.107590 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:23.107700 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
W0505 14:21:23.143066 56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0505 14:21:23.143128 56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0505 14:21:23.312270 56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0505 14:21:23.312288 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:23.312377 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:23.327567 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0505 14:21:23.336186 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0505 14:21:23.344528 56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0505 14:21:23.344575 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0505 14:21:23.352890 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:23.361005 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0505 14:21:23.369046 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:23.377280 56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0505 14:21:23.385827 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0505 14:21:23.394012 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0505 14:21:23.402113 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0505 14:21:23.410536 56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0505 14:21:23.418126 56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0505 14:21:23.425500 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:23.526138 56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
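
The containerd pass above (14:21:23.31-23.53) points crictl at containerd and rewrites /etc/containerd/config.toml before the runtime choice is finalized, even though Docker ends up as the runtime. The essential commands, condensed from the log:

    # point crictl at containerd and force the cgroupfs driver, then restart it
    printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
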
I0505 14:21:23.544818 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:23.544892 56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0505 14:21:23.559895 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:23.572081 56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0505 14:21:23.584840 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:23.595478 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:23.606028 56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0505 14:21:23.632278 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:23.643848 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:23.658675 56262 ssh_runner.go:195] Run: which cri-dockerd
I0505 14:21:23.661665 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0505 14:21:23.669850 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0505 14:21:23.683220 56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0505 14:21:23.786303 56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0505 14:21:23.893788 56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0505 14:21:23.893809 56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0505 14:21:23.908293 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:24.010074 56262 ssh_runner.go:195] Run: sudo systemctl restart docker
I0505 14:21:26.298709 56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.287835945s)
I0505 14:21:26.298771 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0505 14:21:26.310190 56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0505 14:21:26.324652 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:26.336377 56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0505 14:21:26.435974 56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0505 14:21:26.534723 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:26.647643 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0505 14:21:26.661375 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:26.672706 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:26.778709 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
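
The block from 14:21:23.64 to 14:21:26.78 is the switch to Docker plus cri-dockerd: crictl is re-pointed at /var/run/cri-dockerd.sock, Docker gets a cgroupfs daemon.json, and the cri-docker socket/service are unmasked, enabled and restarted. A condensed sketch of the logged sequence:

    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
    sudo systemctl unmask docker.service && sudo systemctl enable docker.socket
    sudo systemctl daemon-reload && sudo systemctl restart docker     # picks up the cgroupfs daemon.json
    sudo systemctl unmask cri-docker.socket && sudo systemctl enable cri-docker.socket
    sudo systemctl restart cri-docker.socket cri-docker.service
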
I0505 14:21:26.840618 56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0505 14:21:26.840697 56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0505 14:21:26.844919 56262 start.go:562] Will wait 60s for crictl version
I0505 14:21:26.844974 56262 ssh_runner.go:195] Run: which crictl
I0505 14:21:26.849165 56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0505 14:21:26.874329 56262 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.0.2
RuntimeApiVersion: v1
I0505 14:21:26.874403 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:26.890208 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:26.929797 56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
I0505 14:21:26.949648 56262 out.go:177] - env NO_PROXY=192.169.0.51
I0505 14:21:26.970782 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:26.971166 56262 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0505 14:21:26.975958 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:26.985550 56262 mustload.go:65] Loading cluster: ha-671000
I0505 14:21:26.985727 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:26.985939 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:21:26.985954 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:21:26.994516 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57918
I0505 14:21:26.994869 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:21:26.995203 56262 main.go:141] libmachine: Using API Version 1
I0505 14:21:26.995220 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:21:26.995417 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:21:26.995536 56262 main.go:141] libmachine: (ha-671000) Calling .GetState
I0505 14:21:26.995629 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:26.995703 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
I0505 14:21:26.996652 56262 host.go:66] Checking if "ha-671000" exists ...
I0505 14:21:26.996892 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:21:26.996917 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:21:27.005463 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57920
I0505 14:21:27.005786 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:21:27.006124 56262 main.go:141] libmachine: Using API Version 1
I0505 14:21:27.006142 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:21:27.006378 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:21:27.006493 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:27.006597 56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.52
I0505 14:21:27.006603 56262 certs.go:194] generating shared ca certs ...
I0505 14:21:27.006614 56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:27.006755 56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
I0505 14:21:27.006813 56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
I0505 14:21:27.006821 56262 certs.go:256] generating profile certs ...
I0505 14:21:27.006913 56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
I0505 14:21:27.006999 56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e823369f
I0505 14:21:27.007048 56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
I0505 14:21:27.007055 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0505 14:21:27.007075 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0505 14:21:27.007095 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0505 14:21:27.007113 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0505 14:21:27.007130 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0505 14:21:27.007151 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0505 14:21:27.007170 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0505 14:21:27.007187 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0505 14:21:27.007262 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
W0505 14:21:27.007299 56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
I0505 14:21:27.007308 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
I0505 14:21:27.007341 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
I0505 14:21:27.007375 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
I0505 14:21:27.007408 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
I0505 14:21:27.007476 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:27.007517 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
I0505 14:21:27.007538 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.007556 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
I0505 14:21:27.007581 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:27.007663 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:27.007746 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:27.007820 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:27.007907 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:27.036107 56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0505 14:21:27.039382 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0505 14:21:27.047195 56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
I0505 14:21:27.050362 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
I0505 14:21:27.058524 56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
I0505 14:21:27.061585 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0505 14:21:27.069461 56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
I0505 14:21:27.072439 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0505 14:21:27.080982 56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
I0505 14:21:27.084070 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0505 14:21:27.092062 56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
I0505 14:21:27.095149 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I0505 14:21:27.103105 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0505 14:21:27.123887 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0505 14:21:27.144018 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0505 14:21:27.164034 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0505 14:21:27.183960 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0505 14:21:27.204170 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0505 14:21:27.224085 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0505 14:21:27.244379 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0505 14:21:27.264411 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
I0505 14:21:27.283983 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0505 14:21:27.303697 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
I0505 14:21:27.323613 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0505 14:21:27.337907 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
I0505 14:21:27.351842 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0505 14:21:27.365462 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0505 14:21:27.379337 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0505 14:21:27.393337 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I0505 14:21:27.406867 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0505 14:21:27.420462 56262 ssh_runner.go:195] Run: openssl version
I0505 14:21:27.425063 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
I0505 14:21:27.433747 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
I0505 14:21:27.437275 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 5 21:08 /usr/share/ca-certificates/542102.pem
I0505 14:21:27.437314 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
I0505 14:21:27.441663 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
I0505 14:21:27.450070 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0505 14:21:27.458559 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.462027 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 5 20:59 /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.462088 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.466402 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0505 14:21:27.474903 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
I0505 14:21:27.484026 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
I0505 14:21:27.487471 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 5 21:08 /usr/share/ca-certificates/54210.pem
I0505 14:21:27.487506 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
I0505 14:21:27.491806 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
I0505 14:21:27.500356 56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0505 14:21:27.503912 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0505 14:21:27.508255 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0505 14:21:27.512583 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0505 14:21:27.516997 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0505 14:21:27.521261 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0505 14:21:27.525514 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
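
The openssl runs above do two things: link each CA under its OpenSSL subject-hash name in /etc/ssl/certs, and verify that none of the control-plane certs expires within the next 24 hours. The pattern, taken from the logged commands (the log computes hash b5213941 for minikubeCA.pem):

    # link a CA under its subject-hash name so OpenSSL can find it
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # non-zero exit means the cert expires within 86400 seconds (24h)
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
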
I0505 14:21:27.529849 56262 kubeadm.go:928] updating node {m02 192.169.0.52 8443 v1.30.0 docker true true} ...
I0505 14:21:27.529904 56262 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.52
[Install]
config:
{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0505 14:21:27.529918 56262 kube-vip.go:111] generating kube-vip config ...
I0505 14:21:27.529952 56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0505 14:21:27.542376 56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
I0505 14:21:27.542421 56262 kube-vip.go:133] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 192.169.0.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I0505 14:21:27.542477 56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0505 14:21:27.550208 56262 binaries.go:44] Found k8s binaries, skipping transfer
I0505 14:21:27.550254 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0505 14:21:27.557751 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0505 14:21:27.571295 56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0505 14:21:27.584791 56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
I0505 14:21:27.598438 56262 ssh_runner.go:195] Run: grep 192.169.0.254 control-plane.minikube.internal$ /etc/hosts
I0505 14:21:27.601396 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:27.610834 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:27.705062 56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
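
At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and the kube-vip static-pod manifest have been written, the control-plane VIP added to /etc/hosts, and the kubelet started. Roughly, on the node (the hosts edit is simplified here; the log rewrites the existing line rather than appending):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
    # 10-kubeadm.conf / kubelet.service carry the ExecStart shown above; kube-vip.yaml is the manifest above
    grep -q control-plane.minikube.internal /etc/hosts \
      || echo '192.169.0.254 control-plane.minikube.internal' | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload && sudo systemctl start kubelet
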
I0505 14:21:27.720000 56262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0505 14:21:27.761967 56262 out.go:177] * Verifying Kubernetes components...
I0505 14:21:27.720191 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:27.783193 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:27.916127 56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0505 14:21:27.937011 56262 loader.go:395] Config loaded from file: /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:27.937198 56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0505 14:21:27.937233 56262 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
I0505 14:21:27.937400 56262 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m02" to be "Ready" ...
I0505 14:21:27.937478 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:27.937483 56262 round_trippers.go:469] Request Headers:
I0505 14:21:27.937491 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:27.937495 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.141758 56262 round_trippers.go:574] Response Status: 200 OK in 9202 milliseconds
I0505 14:21:37.151494 56262 node_ready.go:49] node "ha-671000-m02" has status "Ready":"True"
I0505 14:21:37.151510 56262 node_ready.go:38] duration metric: took 9.212150687s for node "ha-671000-m02" to be "Ready" ...
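
The readiness wait above polls a raw GET on /api/v1/nodes/ha-671000-m02 through the 192.169.0.51 apiserver until the Ready condition is True. With the test's kubeconfig, a manual check would look roughly like this (the ha-671000 context name is assumed from the profile name):

    kubectl --context ha-671000 get node ha-671000-m02 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until ready:
    kubectl --context ha-671000 wait node/ha-671000-m02 --for=condition=Ready --timeout=6m
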
I0505 14:21:37.151520 56262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0505 14:21:37.151577 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:21:37.151583 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.151590 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.151594 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.191750 56262 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
I0505 14:21:37.198443 56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.198500 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
I0505 14:21:37.198504 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.198511 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.198515 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.209480 56262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0505 14:21:37.210158 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.210166 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.210172 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.210175 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.218742 56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0505 14:21:37.219086 56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.219096 56262 pod_ready.go:81] duration metric: took 20.63356ms for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.219105 56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.219148 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
I0505 14:21:37.219153 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.219162 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.219170 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.221463 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:37.221880 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.221889 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.221897 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.221905 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.226727 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:37.227035 56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.227045 56262 pod_ready.go:81] duration metric: took 7.931899ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.227052 56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.227120 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
I0505 14:21:37.227125 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.227131 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.227135 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.228755 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.229130 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.229137 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.229143 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.229147 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.230595 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.230887 56262 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.230895 56262 pod_ready.go:81] duration metric: took 3.837029ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.230901 56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.230929 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
I0505 14:21:37.230934 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.230939 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.230943 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.232448 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.232868 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:37.232875 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.232880 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.232887 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.234369 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.234695 56262 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.234704 56262 pod_ready.go:81] duration metric: took 3.797599ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.234710 56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.234742 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m03
I0505 14:21:37.234747 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.234753 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.234760 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.236183 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.351671 56262 request.go:629] Waited for 115.086464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:37.351703 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:37.351742 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.351749 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.351752 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.353285 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.353602 56262 pod_ready.go:92] pod "etcd-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.353612 56262 pod_ready.go:81] duration metric: took 118.878942ms for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.353624 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.551816 56262 request.go:629] Waited for 198.124765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
I0505 14:21:37.551893 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
I0505 14:21:37.551900 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.551906 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.551909 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.554076 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:37.753242 56262 request.go:629] Waited for 198.55091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.753343 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.753355 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.753365 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.753371 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.756033 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:37.756647 56262 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.756662 56262 pod_ready.go:81] duration metric: took 402.967586ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.756670 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.952604 56262 request.go:629] Waited for 195.869842ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
I0505 14:21:37.952645 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
I0505 14:21:37.952654 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.952662 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.952668 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.954903 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.151783 56262 request.go:629] Waited for 196.293382ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:38.151830 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:38.151837 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.151842 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.151847 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.156373 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:38.156768 56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:38.156778 56262 pod_ready.go:81] duration metric: took 400.046736ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.156785 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.351807 56262 request.go:629] Waited for 194.95401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
I0505 14:21:38.351854 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
I0505 14:21:38.351862 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.351904 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.351908 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.354097 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.552842 56262 request.go:629] Waited for 198.080217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:38.552968 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:38.552980 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.552990 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.552997 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.555719 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.556135 56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:38.556146 56262 pod_ready.go:81] duration metric: took 399.298154ms for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.556153 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.752061 56262 request.go:629] Waited for 195.828299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
I0505 14:21:38.752126 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
I0505 14:21:38.752135 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.752148 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.752158 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.754957 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.951929 56262 request.go:629] Waited for 196.315529ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:38.951959 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:38.951964 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.951969 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.951973 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.953886 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:38.954275 56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:38.954284 56262 pod_ready.go:81] duration metric: took 398.072724ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.954297 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:39.151925 56262 request.go:629] Waited for 197.547759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.152007 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.152019 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.152025 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.152029 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.157962 56262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0505 14:21:39.352575 56262 request.go:629] Waited for 194.147234ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.352619 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.352625 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.352631 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.352635 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.356708 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:39.553301 56262 request.go:629] Waited for 97.737035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.553334 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.553340 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.553346 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.553351 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.555371 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:39.752052 56262 request.go:629] Waited for 196.251955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.752134 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.752145 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.752153 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.752158 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.754627 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:39.955025 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.955059 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.955067 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.955072 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.956871 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:40.152049 56262 request.go:629] Waited for 194.641301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.152132 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.152171 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.152184 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.152191 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.154660 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:40.456022 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:40.456041 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.456050 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.456056 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.458617 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:40.552124 56262 request.go:629] Waited for 92.99221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.552206 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.552212 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.552220 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.552225 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.554220 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:40.956144 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:40.956162 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.956168 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.956172 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.958759 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:40.959215 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.959223 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.959229 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.959232 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.960907 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:40.961228 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:41.455646 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:41.455689 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.455698 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.455722 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.457872 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:41.458331 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:41.458339 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.458344 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.458355 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.460082 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:41.955474 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:41.955516 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.955524 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.955528 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.957597 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:41.958178 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:41.958186 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.958190 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.958193 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.960269 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:42.454954 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:42.454969 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.454975 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.454978 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.456939 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:42.457382 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:42.457390 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.457395 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.457398 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.459026 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:42.955443 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:42.955465 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.955493 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.955500 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.957908 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:42.958355 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:42.958362 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.958368 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.958371 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.959853 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:43.455723 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:43.455776 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.455798 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.455806 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.458560 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:43.458997 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:43.459004 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.459009 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.459013 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.460509 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:43.460811 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:43.955429 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:43.955470 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.955481 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.955487 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.957836 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:43.958298 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:43.958305 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.958310 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.958320 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.960083 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:44.455061 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:44.455081 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.455088 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.455091 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.458998 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:44.459504 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:44.459511 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.459517 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.459521 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.461518 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:44.956537 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:44.956577 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.956598 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.956604 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.959253 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:44.959715 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:44.959723 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.959729 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.959733 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.961411 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:45.455377 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:45.455402 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.455414 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.455420 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.458080 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:45.458718 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:45.458729 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.458736 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.458752 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.463742 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:45.464348 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:45.955580 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:45.955620 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.955630 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.955635 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.957968 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:45.958442 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:45.958449 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.958455 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.958466 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.959999 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:46.457118 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:46.457136 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.457145 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.457149 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.459543 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:46.460023 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:46.460031 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.460036 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.460047 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.461647 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:46.956302 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:46.956318 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.956324 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.956326 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.958416 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:46.958859 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:46.958866 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.958872 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.958874 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.960501 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:47.456753 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:47.456797 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.456806 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.456812 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.458891 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:47.459328 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:47.459336 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.459342 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.459345 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.460911 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:47.955503 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:47.955545 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.955558 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.955564 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.959575 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:47.960158 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:47.960166 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.960171 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.960175 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.961799 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:47.962164 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:48.456730 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:48.456747 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.456753 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.456757 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.460539 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:48.461047 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:48.461055 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.461061 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.461064 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.465508 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:48.465989 56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:48.465998 56262 pod_ready.go:81] duration metric: took 9.510763792s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
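The loop above, a GET on the pod followed by a GET on its node, a pause of roughly half a second, repeated until the Ready condition flips from "False" to "True" or the 6m0s budget runs out, is the shape of the pod_ready wait. A minimal sketch of such a loop, assuming a client-go Clientset and a recent apimachinery that provides wait.PollUntilContextTimeout; waitPodReady and the 500ms interval are illustrative, not minikube's exact helper:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a kube-system pod until its PodReady condition is True,
// mirroring the repeated GETs in the trace above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}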
I0505 14:21:48.466006 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.466042 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m03
I0505 14:21:48.466047 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.466052 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.466055 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.472370 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:21:48.473005 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:48.473012 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.473017 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.473020 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.481996 56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0505 14:21:48.482501 56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:48.482510 56262 pod_ready.go:81] duration metric: took 16.497528ms for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.482517 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.482551 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:48.482556 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.482561 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.482565 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.490468 56262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0505 14:21:48.491138 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:48.491145 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.491151 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.491155 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.494380 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:48.983087 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:49.004024 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.004031 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.004035 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.006380 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:49.007016 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:49.007024 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.007030 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.007033 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.008914 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:49.483919 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:49.483931 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.483938 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.483941 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.486104 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:49.486673 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:49.486681 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.486687 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.486691 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.488609 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:49.983081 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:49.983096 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.983104 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.983108 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.985873 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:49.986420 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:49.986428 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.986434 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.986437 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.988349 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:50.482957 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:50.482970 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.482976 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.482980 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.485479 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:50.485920 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:50.485927 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.485934 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.485938 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.487720 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:50.488107 56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:50.983210 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:50.983225 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.983232 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.983236 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.986255 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:50.986840 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:50.986849 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.986855 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.986866 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.989948 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:51.483355 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:51.483374 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.483388 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.483395 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.486820 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:51.487280 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:51.487287 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.487293 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.487297 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.489325 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:51.983090 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:51.983105 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.983112 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.983115 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.984988 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:51.985393 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:51.985401 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.985405 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.985410 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.986930 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:52.484493 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:52.484507 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.484516 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.484521 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.487250 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:52.487686 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:52.487694 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.487698 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.487702 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.489501 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:52.489895 56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:52.983025 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:52.983048 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.983059 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.983066 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.986110 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:52.986621 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:52.986629 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.986634 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.986639 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.988098 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:53.484742 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:53.484762 56262 round_trippers.go:469] Request Headers:
I0505 14:21:53.484773 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:53.484779 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:53.488010 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:53.488477 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:53.488487 56262 round_trippers.go:469] Request Headers:
I0505 14:21:53.488495 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:53.488501 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:53.490598 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:53.982981 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:54.035555 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.035577 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.035582 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.038056 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:54.038420 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:54.038427 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.038431 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.038436 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.040740 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:54.483231 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:54.483250 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.483259 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.483268 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.486904 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:54.487432 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:54.487440 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.487445 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.487453 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.489085 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.489450 56262 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:54.489459 56262 pod_ready.go:81] duration metric: took 6.006607245s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
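Each pod GET in this wait is paired with a GET on the node the pod runs on (ha-671000-m02 here), which suggests the readiness check also confirms the hosting node before reporting the pod as Ready. A hedged sketch of such a node check, using the same imports as the sketch above; nodeReady is an illustrative name:

// nodeReady reports whether the named node's NodeReady condition is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}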
I0505 14:21:54.489472 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.489506 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
I0505 14:21:54.489511 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.489516 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.489520 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.491341 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.492125 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
I0505 14:21:54.492155 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.492161 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.492166 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.494017 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.494387 56262 pod_ready.go:92] pod "kube-proxy-b45s6" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:54.494395 56262 pod_ready.go:81] duration metric: took 4.917824ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.494401 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.494436 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:54.494441 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.494447 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.494452 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.496166 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.496620 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:54.496627 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.496633 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.496637 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.498306 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.996074 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:54.996123 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.996136 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.996145 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.999201 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:54.999706 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:54.999714 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.999720 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.999724 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.001519 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:55.495423 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:55.495482 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.495494 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.495500 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.498280 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:55.498730 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:55.498738 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.498744 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.498748 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.500462 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:55.995317 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:55.995337 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.995349 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.995356 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.998789 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:55.999222 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:55.999231 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.999238 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.999241 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.001041 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:56.494888 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:56.494946 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.494958 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.494968 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.497790 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:56.498347 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:56.498358 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.498365 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.498371 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.500278 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:56.500656 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:56.994875 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:56.994892 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.994900 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.994906 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.998618 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:56.999206 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:56.999214 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.999220 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.999223 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.000855 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:57.495334 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:57.495358 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.495370 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.495375 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.498502 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:57.498951 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:57.498958 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.498963 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.498966 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.500746 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:57.995520 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:57.995543 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.995579 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.995598 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.998407 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:57.998972 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:57.998979 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.998985 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.999001 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:58.000625 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:58.495031 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:58.495049 56262 round_trippers.go:469] Request Headers:
I0505 14:21:58.495061 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:58.495067 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:58.498099 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:58.498667 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:58.498677 56262 round_trippers.go:469] Request Headers:
I0505 14:21:58.498685 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:58.498691 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:58.500315 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:58.995219 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:59.001733 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.001744 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.001750 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.004276 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:59.004776 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:59.004783 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.004788 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.004792 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.006346 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:59.006731 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:59.495209 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:59.495224 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.495243 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.495269 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.498470 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:59.498897 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:59.498905 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.498911 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.498915 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.501440 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:59.995151 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:59.995179 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.995191 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.995198 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.998453 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:59.999020 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:59.999031 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.999039 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.999043 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.000983 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:00.495135 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:00.495148 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.495154 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.495158 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.498254 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:00.499175 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:00.499184 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.499190 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.499193 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.501895 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:00.995194 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:00.995216 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.995229 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.995237 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.998468 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:00.998920 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:00.998926 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.998932 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.998935 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.000600 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:01.494835 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:01.494860 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.494871 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.494877 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.497889 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:01.498547 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:01.498554 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.498558 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.498561 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.500447 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:01.500751 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:22:01.996453 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:01.996472 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.996483 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.996490 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.999407 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:01.999918 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:01.999925 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.999931 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.999934 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.001706 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:02.495361 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:02.495382 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.495393 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.495400 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.498902 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:02.499504 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:02.499511 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.499517 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.499521 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.501049 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:02.995527 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:02.995548 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.995559 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.995565 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.998530 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:02.998981 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:02.998988 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.998994 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.998999 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:03.000798 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:03.495714 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:03.495730 56262 round_trippers.go:469] Request Headers:
I0505 14:22:03.495737 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:03.495741 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:03.498051 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:03.498563 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:03.498571 56262 round_trippers.go:469] Request Headers:
I0505 14:22:03.498576 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:03.498588 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:03.500374 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:03.995061 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:04.002434 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.002442 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.002447 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.004861 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:04.005402 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:04.005409 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.005415 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.005418 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.011753 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:22:04.012403 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:22:04.494873 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:04.494893 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.494902 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.494906 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.497460 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:04.497938 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:04.497946 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.497951 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.497960 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.499356 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:04.995159 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:04.995178 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.995188 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.995195 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.998687 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:04.999335 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:04.999342 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.999348 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.999353 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.000905 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.494984 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:05.494997 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.495003 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.495007 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.497333 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.497727 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:05.497735 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.497741 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.497744 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.499501 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.500069 56262 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.500079 56262 pod_ready.go:81] duration metric: took 11.005361676s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.500095 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.500132 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zwgd2
I0505 14:22:05.500137 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.500142 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.500146 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.502320 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.502750 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:22:05.502757 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.502763 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.502767 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.504769 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.505126 56262 pod_ready.go:92] pod "kube-proxy-zwgd2" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.505135 56262 pod_ready.go:81] duration metric: took 5.036025ms for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.505142 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.505179 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
I0505 14:22:05.505184 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.505189 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.505194 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.507083 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.507461 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:05.507468 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.507473 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.507477 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.509224 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.509709 56262 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.509724 56262 pod_ready.go:81] duration metric: took 4.57068ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.509732 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.509767 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
I0505 14:22:05.509771 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.509777 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.509780 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.511597 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.511989 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:22:05.511996 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.512000 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.512010 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.514080 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.514548 56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.514556 56262 pod_ready.go:81] duration metric: took 4.819427ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.514563 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.514599 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m03
I0505 14:22:05.514603 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.514609 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.514612 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.516436 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.516907 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:22:05.516914 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.516919 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.516923 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.519043 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.519280 56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.519288 56262 pod_ready.go:81] duration metric: took 4.719804ms for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.519294 56262 pod_ready.go:38] duration metric: took 28.365933714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
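(The pod_ready lines above poll each system pod until its Ready condition reports True, roughly every 500ms. The following is a minimal sketch of that polling pattern with client-go, assuming an already-built *kubernetes.Clientset; the function name and timeout handling are illustrative and not minikube's actual pod_ready helper.)

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod in kube-system until its Ready condition is True
// or the timeout expires. Illustrative only; the real helper adds extra
// checks and logging around the same GET loop visible in the log.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence of the GETs above
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}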
I0505 14:22:05.519320 56262 api_server.go:52] waiting for apiserver process to appear ...
I0505 14:22:05.519375 56262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0505 14:22:05.533426 56262 api_server.go:72] duration metric: took 37.809561996s to wait for apiserver process to appear ...
I0505 14:22:05.533438 56262 api_server.go:88] waiting for apiserver healthz status ...
I0505 14:22:05.533454 56262 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
I0505 14:22:05.537141 56262 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
ok
I0505 14:22:05.537173 56262 round_trippers.go:463] GET https://192.169.0.51:8443/version
I0505 14:22:05.537183 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.537191 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.537195 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.537884 56262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0505 14:22:05.538028 56262 api_server.go:141] control plane version: v1.30.0
I0505 14:22:05.538038 56262 api_server.go:131] duration metric: took 4.594882ms to wait for apiserver health ...
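(The healthz probe above is an HTTPS GET against /healthz that expects the literal body "ok". A rough standard-library equivalent is sketched below; the TLS setup is deliberately simplified with InsecureSkipVerify for illustration, whereas a real check should trust the cluster CA.)

package health

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs <endpoint>/healthz and reports whether the response
// is 200 with body "ok", e.g. apiserverHealthy("https://192.169.0.51:8443").
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}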
I0505 14:22:05.538049 56262 system_pods.go:43] waiting for kube-system pods to appear ...
I0505 14:22:05.696401 56262 request.go:629] Waited for 158.305976ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:05.696517 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:05.696529 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.696539 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.696547 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.703009 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:22:05.708412 56262 system_pods.go:59] 26 kube-system pods found
I0505 14:22:05.708432 56262 system_pods.go:61] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:05.708439 56262 system_pods.go:61] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:05.708445 56262 system_pods.go:61] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
I0505 14:22:05.708448 56262 system_pods.go:61] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
I0505 14:22:05.708451 56262 system_pods.go:61] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
I0505 14:22:05.708458 56262 system_pods.go:61] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
I0505 14:22:05.708462 56262 system_pods.go:61] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
I0505 14:22:05.708464 56262 system_pods.go:61] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
I0505 14:22:05.708468 56262 system_pods.go:61] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0505 14:22:05.708471 56262 system_pods.go:61] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
I0505 14:22:05.708474 56262 system_pods.go:61] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
I0505 14:22:05.708477 56262 system_pods.go:61] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
I0505 14:22:05.708482 56262 system_pods.go:61] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
I0505 14:22:05.708487 56262 system_pods.go:61] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
I0505 14:22:05.708489 56262 system_pods.go:61] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
I0505 14:22:05.708493 56262 system_pods.go:61] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
I0505 14:22:05.708495 56262 system_pods.go:61] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
I0505 14:22:05.708497 56262 system_pods.go:61] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
I0505 14:22:05.708500 56262 system_pods.go:61] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
I0505 14:22:05.708502 56262 system_pods.go:61] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
I0505 14:22:05.708505 56262 system_pods.go:61] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
I0505 14:22:05.708507 56262 system_pods.go:61] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
I0505 14:22:05.708510 56262 system_pods.go:61] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
I0505 14:22:05.708512 56262 system_pods.go:61] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
I0505 14:22:05.708515 56262 system_pods.go:61] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
I0505 14:22:05.708520 56262 system_pods.go:61] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
I0505 14:22:05.708525 56262 system_pods.go:74] duration metric: took 170.469417ms to wait for pod list to return data ...
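(The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, whose defaults are QPS 5 / Burst 10. The sketch below shows how those limits are raised on a rest.Config; the specific numbers are arbitrary examples, not minikube's settings.)

package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset builds a clientset from a kubeconfig and raises the
// client-side QPS/Burst so bursts of GETs like the ones in this log are not
// delayed by the default limiter. The values below are illustrative.
func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}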
I0505 14:22:05.708531 56262 default_sa.go:34] waiting for default service account to be created ...
I0505 14:22:05.897069 56262 request.go:629] Waited for 188.474109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
I0505 14:22:05.897179 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
I0505 14:22:05.897186 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.897194 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.897199 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.950188 56262 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
I0505 14:22:05.950392 56262 default_sa.go:45] found service account: "default"
I0505 14:22:05.950405 56262 default_sa.go:55] duration metric: took 241.864725ms for default service account to be created ...
I0505 14:22:05.950412 56262 system_pods.go:116] waiting for k8s-apps to be running ...
I0505 14:22:06.095263 56262 request.go:629] Waited for 144.804696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:06.095366 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:06.095376 56262 round_trippers.go:469] Request Headers:
I0505 14:22:06.095388 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:06.095395 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:06.102144 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:22:06.107768 56262 system_pods.go:86] 26 kube-system pods found
I0505 14:22:06.107783 56262 system_pods.go:89] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:06.107794 56262 system_pods.go:89] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:06.107800 56262 system_pods.go:89] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
I0505 14:22:06.107803 56262 system_pods.go:89] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
I0505 14:22:06.107808 56262 system_pods.go:89] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
I0505 14:22:06.107811 56262 system_pods.go:89] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
I0505 14:22:06.107815 56262 system_pods.go:89] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
I0505 14:22:06.107818 56262 system_pods.go:89] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
I0505 14:22:06.107823 56262 system_pods.go:89] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0505 14:22:06.107826 56262 system_pods.go:89] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
I0505 14:22:06.107831 56262 system_pods.go:89] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
I0505 14:22:06.107834 56262 system_pods.go:89] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
I0505 14:22:06.107838 56262 system_pods.go:89] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
I0505 14:22:06.107842 56262 system_pods.go:89] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
I0505 14:22:06.107847 56262 system_pods.go:89] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
I0505 14:22:06.107854 56262 system_pods.go:89] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
I0505 14:22:06.107862 56262 system_pods.go:89] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
I0505 14:22:06.107866 56262 system_pods.go:89] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
I0505 14:22:06.107869 56262 system_pods.go:89] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
I0505 14:22:06.107874 56262 system_pods.go:89] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
I0505 14:22:06.107877 56262 system_pods.go:89] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
I0505 14:22:06.107887 56262 system_pods.go:89] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
I0505 14:22:06.107890 56262 system_pods.go:89] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
I0505 14:22:06.107894 56262 system_pods.go:89] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
I0505 14:22:06.107897 56262 system_pods.go:89] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
I0505 14:22:06.107900 56262 system_pods.go:89] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
I0505 14:22:06.107905 56262 system_pods.go:126] duration metric: took 157.48572ms to wait for k8s-apps to be running ...
I0505 14:22:06.107910 56262 system_svc.go:44] waiting for kubelet service to be running ....
I0505 14:22:06.107954 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0505 14:22:06.119916 56262 system_svc.go:56] duration metric: took 12.002036ms WaitForService to wait for kubelet
I0505 14:22:06.119930 56262 kubeadm.go:576] duration metric: took 38.396059047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0505 14:22:06.119941 56262 node_conditions.go:102] verifying NodePressure condition ...
I0505 14:22:06.295252 56262 request.go:629] Waited for 175.271788ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
I0505 14:22:06.295332 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
I0505 14:22:06.295338 56262 round_trippers.go:469] Request Headers:
I0505 14:22:06.295345 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:06.295350 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:06.299820 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:22:06.300760 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300774 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300783 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300787 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300791 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300794 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300797 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300801 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300804 56262 node_conditions.go:105] duration metric: took 180.85639ms to run NodePressure ...
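(The NodePressure pass above lists every node and reads per-node CPU and ephemeral-storage figures, which is where the repeated "17734596Ki" and "cpu capacity is 2" lines come from. A minimal sketch of reading those figures with client-go, assuming an existing clientset; whether minikube reads Capacity or Allocatable is not reproduced here.)

package nodes

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the two figures the log
// reports for each one.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %d\n", n.Name, storage.String(), cpu.Value())
	}
	return nil
}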
I0505 14:22:06.300811 56262 start.go:240] waiting for startup goroutines ...
I0505 14:22:06.300829 56262 start.go:254] writing updated cluster config ...
I0505 14:22:06.322636 56262 out.go:177]
I0505 14:22:06.343913 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:22:06.344042 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:22:06.366539 56262 out.go:177] * Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
I0505 14:22:06.408466 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:22:06.408493 56262 cache.go:56] Caching tarball of preloaded images
I0505 14:22:06.408686 56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0505 14:22:06.408703 56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:22:06.408834 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:22:06.409908 56262 start.go:360] acquireMachinesLock for ha-671000-m03: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:22:06.409993 56262 start.go:364] duration metric: took 67.566µs to acquireMachinesLock for "ha-671000-m03"
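(acquireMachinesLock above serializes concurrent operations on the same machine, with the Delay/Timeout values shown in the lock struct. Purely as a hedged illustration of the idea, and not minikube's actual implementation, a file-based exclusive lock with a retry delay and timeout could look like this:)

package lock

import (
	"fmt"
	"os"
	"time"
)

// acquire tries to create a lock file exclusively, retrying every delay until
// timeout. Illustrative only; a production lock also records ownership and
// handles stale holders.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !os.IsExist(err) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}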
I0505 14:22:06.410011 56262 start.go:96] Skipping create...Using existing machine configuration
I0505 14:22:06.410016 56262 fix.go:54] fixHost starting: m03
I0505 14:22:06.410315 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:22:06.410333 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:22:06.419592 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57925
I0505 14:22:06.419993 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:22:06.420359 56262 main.go:141] libmachine: Using API Version 1
I0505 14:22:06.420375 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:22:06.420588 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:22:06.420701 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:06.420780 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetState
I0505 14:22:06.420862 56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:22:06.420955 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 55740
I0505 14:22:06.421873 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
I0505 14:22:06.421938 56262 fix.go:112] recreateIfNeeded on ha-671000-m03: state=Stopped err=<nil>
I0505 14:22:06.421958 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
W0505 14:22:06.422054 56262 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:22:06.443498 56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m03" ...
I0505 14:22:06.485588 56262 main.go:141] libmachine: (ha-671000-m03) Calling .Start
I0505 14:22:06.485823 56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:22:06.485876 56262 main.go:141] libmachine: (ha-671000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid
I0505 14:22:06.487603 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
I0505 14:22:06.487617 56262 main.go:141] libmachine: (ha-671000-m03) DBG | pid 55740 is in state "Stopped"
I0505 14:22:06.487633 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid...
I0505 14:22:06.488242 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Using UUID be90591f-7869-4905-ae38-2f481381ca7c
I0505 14:22:06.514163 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Generated MAC ce:17:a:56:1e:f8
I0505 14:22:06.514197 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
I0505 14:22:06.514318 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:22:06.514365 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:22:06.514413 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "be90591f-7869-4905-ae38-2f481381ca7c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
I0505 14:22:06.514460 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U be90591f-7869-4905-ae38-2f481381ca7c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
I0505 14:22:06.514470 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0505 14:22:06.515957 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Pid is 56300
I0505 14:22:06.516349 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Attempt 0
I0505 14:22:06.516370 56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:22:06.516444 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 56300
I0505 14:22:06.518246 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Searching for ce:17:a:56:1e:f8 in /var/db/dhcpd_leases ...
I0505 14:22:06.518360 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found 53 entries in /var/db/dhcpd_leases!
I0505 14:22:06.518376 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
I0505 14:22:06.518417 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
I0505 14:22:06.518433 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
I0505 14:22:06.518449 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
I0505 14:22:06.518457 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found match: ce:17:a:56:1e:f8
I0505 14:22:06.518467 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetConfigRaw
I0505 14:22:06.518473 56262 main.go:141] libmachine: (ha-671000-m03) DBG | IP: 192.169.0.53
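(To recover the restarted VM's address, the driver scans /var/db/dhcpd_leases for the lease whose hardware address matches the generated MAC, as shown above where ce:17:a:56:1e:f8 resolves to 192.169.0.53. Below is a simplified parser for that lookup; the brace-delimited name=value layout it assumes is based on the stock macOS dhcpd_leases file and is not the struct form printed in the log.)

package leases

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC returns the ip_address of the lease block whose hw_address line
// contains mac. Simplified format handling; real parsing needs more care.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip, found = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			found = strings.Contains(line, mac)
		case line == "}":
			if found {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}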
I0505 14:22:06.519132 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
I0505 14:22:06.519357 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:22:06.519808 56262 machine.go:94] provisionDockerMachine start ...
I0505 14:22:06.519818 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:06.519942 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:06.520079 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:06.520182 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:06.520284 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:06.520381 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:06.520488 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:06.520648 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:06.520655 56262 main.go:141] libmachine: About to run SSH command:
hostname
I0505 14:22:06.524407 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0505 14:22:06.532556 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0505 14:22:06.533607 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:22:06.533622 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:22:06.533633 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:22:06.533644 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:22:06.917916 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0505 14:22:06.917942 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0505 14:22:07.032632 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:22:07.032653 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:22:07.032677 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:22:07.032689 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:22:07.033533 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0505 14:22:07.033546 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0505 14:22:12.402771 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0505 14:22:12.402786 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0505 14:22:12.402806 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0505 14:22:12.426606 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0505 14:22:41.581350 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0505 14:22:41.581367 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
I0505 14:22:41.581506 56262 buildroot.go:166] provisioning hostname "ha-671000-m03"
I0505 14:22:41.581517 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
I0505 14:22:41.581600 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.581683 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.581781 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.581875 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.581960 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.582100 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.582238 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.582247 56262 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-671000-m03 && echo "ha-671000-m03" | sudo tee /etc/hostname
I0505 14:22:41.647083 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m03
I0505 14:22:41.647098 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.647232 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.647343 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.647430 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.647521 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.647657 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.647849 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.647862 56262 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-671000-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m03/g' /etc/hosts;
else
echo '127.0.1.1 ha-671000-m03' | sudo tee -a /etc/hosts;
fi
fi
I0505 14:22:41.709306 56262 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0505 14:22:41.709326 56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
I0505 14:22:41.709344 56262 buildroot.go:174] setting up certificates
I0505 14:22:41.709357 56262 provision.go:84] configureAuth start
I0505 14:22:41.709363 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
I0505 14:22:41.709499 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
I0505 14:22:41.709593 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.709680 56262 provision.go:143] copyHostCerts
I0505 14:22:41.709715 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:22:41.709786 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
I0505 14:22:41.709792 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:22:41.709937 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
I0505 14:22:41.710168 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:22:41.710212 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
I0505 14:22:41.710217 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:22:41.710297 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
I0505 14:22:41.710445 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:22:41.710490 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
I0505 14:22:41.710497 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:22:41.710575 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
I0505 14:22:41.710718 56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m03 san=[127.0.0.1 192.169.0.53 ha-671000-m03 localhost minikube]
I0505 14:22:41.753782 56262 provision.go:177] copyRemoteCerts
I0505 14:22:41.753842 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0505 14:22:41.753857 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.753999 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.754106 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.754195 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.754274 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:41.788993 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0505 14:22:41.789066 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0505 14:22:41.808008 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
I0505 14:22:41.808084 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0505 14:22:41.828147 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0505 14:22:41.828228 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0505 14:22:41.848543 56262 provision.go:87] duration metric: took 139.178952ms to configureAuth
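(configureAuth regenerates the machine's server certificate with the SANs listed above, 127.0.0.1, 192.169.0.53, ha-671000-m03, localhost and minikube, then scp's ca.pem/server.pem/server-key.pem into /etc/docker. A condensed sketch of issuing such a SAN certificate from an existing CA with crypto/x509 follows; the serial handling, key size and validity are simplified and the PEM persistence is omitted.)

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by the given CA, valid for
// the DNS names and IPs passed in. Returns the DER-encoded cert and its key.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified serial
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}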
I0505 14:22:41.848558 56262 buildroot.go:189] setting minikube options for container-runtime
I0505 14:22:41.848732 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:22:41.848746 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:41.848890 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.848974 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.849066 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.849145 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.849226 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.849346 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.849468 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.849476 56262 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0505 14:22:41.905134 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0505 14:22:41.905147 56262 buildroot.go:70] root file system type: tmpfs
I0505 14:22:41.905226 56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0505 14:22:41.905236 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.905372 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.905459 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.905559 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.905645 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.905773 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.905913 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.905965 56262 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.169.0.51"
Environment="NO_PROXY=192.169.0.51,192.169.0.52"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0505 14:22:41.971506 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.169.0.51
Environment=NO_PROXY=192.169.0.51,192.169.0.52
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0505 14:22:41.971532 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.971667 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.971753 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.971832 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.971919 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.972061 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.972206 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.972218 56262 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0505 14:22:43.586757 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0505 14:22:43.586772 56262 machine.go:97] duration metric: took 37.066967123s to provisionDockerMachine
I0505 14:22:43.586795 56262 start.go:293] postStartSetup for "ha-671000-m03" (driver="hyperkit")
I0505 14:22:43.586804 56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0505 14:22:43.586816 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.587008 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0505 14:22:43.587022 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.587109 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.587250 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.587368 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.587470 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:43.621728 56262 ssh_runner.go:195] Run: cat /etc/os-release
I0505 14:22:43.624913 56262 info.go:137] Remote host: Buildroot 2023.02.9
I0505 14:22:43.624927 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
I0505 14:22:43.625027 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
I0505 14:22:43.625208 56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
I0505 14:22:43.625215 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
I0505 14:22:43.625422 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0505 14:22:43.632883 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:22:43.652930 56262 start.go:296] duration metric: took 66.125789ms for postStartSetup
I0505 14:22:43.652964 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.653131 56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0505 14:22:43.653145 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.653240 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.653328 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.653413 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.653505 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:43.687474 56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0505 14:22:43.687532 56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0505 14:22:43.719424 56262 fix.go:56] duration metric: took 37.309414657s for fixHost
I0505 14:22:43.719447 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.719581 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.719680 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.719767 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.719859 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.719991 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:43.720140 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:43.720147 56262 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0505 14:22:43.777003 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944163.917671963
I0505 14:22:43.777016 56262 fix.go:216] guest clock: 1714944163.917671963
I0505 14:22:43.777022 56262 fix.go:229] Guest: 2024-05-05 14:22:43.917671963 -0700 PDT Remote: 2024-05-05 14:22:43.719438 -0700 PDT m=+114.784889102 (delta=198.233963ms)
I0505 14:22:43.777033 56262 fix.go:200] guest clock delta is within tolerance: 198.233963ms
I0505 14:22:43.777036 56262 start.go:83] releasing machines lock for "ha-671000-m03", held for 37.367046714s
I0505 14:22:43.777054 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.777184 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
I0505 14:22:43.798458 56262 out.go:177] * Found network options:
I0505 14:22:43.818375 56262 out.go:177] - NO_PROXY=192.169.0.51,192.169.0.52
W0505 14:22:43.839196 56262 proxy.go:119] fail to check proxy env: Error ip not in block
W0505 14:22:43.839212 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:22:43.839223 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.839636 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.839763 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.839847 56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0505 14:22:43.839883 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
W0505 14:22:43.839885 56262 proxy.go:119] fail to check proxy env: Error ip not in block
W0505 14:22:43.839898 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:22:43.839953 56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0505 14:22:43.839970 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.839989 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.840065 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.840123 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.840188 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.840221 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.840303 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:43.840332 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.840420 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
W0505 14:22:43.919168 56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0505 14:22:43.919245 56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0505 14:22:43.936501 56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0505 14:22:43.936515 56262 start.go:494] detecting cgroup driver to use...
I0505 14:22:43.936582 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:22:43.953774 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0505 14:22:43.963068 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0505 14:22:43.972111 56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0505 14:22:43.972163 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0505 14:22:43.981147 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:22:44.011701 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0505 14:22:44.020897 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:22:44.030143 56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0505 14:22:44.039491 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0505 14:22:44.048778 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0505 14:22:44.057937 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0505 14:22:44.067298 56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0505 14:22:44.075698 56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0505 14:22:44.083983 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:22:44.200980 56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0505 14:22:44.219877 56262 start.go:494] detecting cgroup driver to use...
I0505 14:22:44.219946 56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0505 14:22:44.236639 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:22:44.254367 56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0505 14:22:44.271268 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:22:44.282915 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:22:44.293466 56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0505 14:22:44.317181 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:22:44.327878 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:22:44.343024 56262 ssh_runner.go:195] Run: which cri-dockerd
I0505 14:22:44.346054 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0505 14:22:44.353257 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0505 14:22:44.367082 56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0505 14:22:44.465180 56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0505 14:22:44.569600 56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0505 14:22:44.569629 56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0505 14:22:44.584431 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:22:44.680947 56262 ssh_runner.go:195] Run: sudo systemctl restart docker
I0505 14:23:45.736510 56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.056089884s)
I0505 14:23:45.736595 56262 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0505 14:23:45.770790 56262 out.go:177]
W0505 14:23:45.791249 56262 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
May 05 21:22:41 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.352208248Z" level=info msg="Starting up"
May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.353022730Z" level=info msg="containerd not running, starting managed containerd"
May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.358767057Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.373539189Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388000547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388073973Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388137944Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388171760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388313706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388355785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388477111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388518957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388551610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388580389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388726935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388950191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390520791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390570725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390706880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390751886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390888815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390940476Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390972496Z" level=info msg="metadata content store policy set" policy=shared
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394800432Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394883868Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394961138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395000278Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395036706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395111009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395337703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395418767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395454129Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395484232Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395514263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395546554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395576938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395607440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395641518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395677040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395708605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395737963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395799761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395843188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395874408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395904381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395933636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395965927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395995431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396033716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396067448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396098841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396127871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396155969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396184510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396215668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396250321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396280045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396307939Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396379697Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396424577Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396475305Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396510849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396569471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396621386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396656010Z" level=info msg="NRI interface is disabled by configuration."
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396883316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396972499Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397031244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397069101Z" level=info msg="containerd successfully booted in 0.024677s"
May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.379929944Z" level=info msg="[graphdriver] trying configured driver: overlay2"
May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.413119848Z" level=info msg="Loading containers: start."
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.663705690Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.700545709Z" level=info msg="Loading containers: done."
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707501270Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707669278Z" level=info msg="Daemon has completed initialization"
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725886686Z" level=info msg="API listen on [::]:2376"
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725971765Z" level=info msg="API listen on /var/run/docker.sock"
May 05 21:22:43 ha-671000-m03 systemd[1]: Started Docker Application Container Engine.
May 05 21:22:44 ha-671000-m03 systemd[1]: Stopping Docker Application Container Engine...
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.833114404Z" level=info msg="Processing signal 'terminated'"
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834199869Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834666188Z" level=info msg="Daemon shutdown complete"
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834695637Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834707874Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
May 05 21:22:45 ha-671000-m03 systemd[1]: docker.service: Deactivated successfully.
May 05 21:22:45 ha-671000-m03 systemd[1]: Stopped Docker Application Container Engine.
May 05 21:22:45 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
May 05 21:22:45 ha-671000-m03 dockerd[1073]: time="2024-05-05T21:22:45.887265470Z" level=info msg="Starting up"
May 05 21:23:45 ha-671000-m03 dockerd[1073]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
May 05 21:23:45 ha-671000-m03 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0505 14:23:45.791332 56262 out.go:239] *
W0505 14:23:45.791963 56262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0505 14:23:45.854203 56262 out.go:177]
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-671000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run: out/minikube-darwin-amd64 node list -p ha-671000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-671000 -n ha-671000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p ha-671000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 logs -n 25: (3.107156771s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs:
-- stdout --
==> Audit <==
|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
| cp | ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m02:/home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-671000 ssh -n ha-671000-m02 sudo cat | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | /home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt | | | | | |
| cp | ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04:/home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m03 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-671000 ssh -n ha-671000-m04 sudo cat | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | /home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt | | | | | |
| cp | ha-671000 cp testdata/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04:/home/docker/cp-test.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m04.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| cp | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000:/home/docker/cp-test_ha-671000-m04_ha-671000.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-671000 ssh -n ha-671000 sudo cat | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | /home/docker/cp-test_ha-671000-m04_ha-671000.txt | | | | | |
| cp | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m02:/home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-671000 ssh -n ha-671000-m02 sudo cat | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | /home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt | | | | | |
| cp | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m03:/home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt | | | | | |
| ssh | ha-671000 ssh -n | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | ha-671000-m04 sudo cat | | | | | |
| | /home/docker/cp-test.txt | | | | | |
| ssh | ha-671000 ssh -n ha-671000-m03 sudo cat | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | /home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt | | | | | |
| node | ha-671000 node stop m02 -v=7 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
| | --alsologtostderr | | | | | |
| node | ha-671000 node start m02 -v=7 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:20 PDT |
| | --alsologtostderr | | | | | |
| node | list -p ha-671000 -v=7 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT | |
| | --alsologtostderr | | | | | |
| stop | -p ha-671000 -v=7 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT | 05 May 24 14:20 PDT |
| | --alsologtostderr | | | | | |
| start | -p ha-671000 --wait=true -v=7 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT | |
| | --alsologtostderr | | | | | |
| node | list -p ha-671000 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:23 PDT | |
|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/05/05 14:20:48
Running on machine: MacOS-Agent-2
Binary: Built with gc go1.22.1 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0505 14:20:48.965096 56262 out.go:291] Setting OutFile to fd 1 ...
I0505 14:20:48.965304 56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:20:48.965309 56262 out.go:304] Setting ErrFile to fd 2...
I0505 14:20:48.965313 56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:20:48.965501 56262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:20:48.966984 56262 out.go:298] Setting JSON to false
I0505 14:20:48.991851 56262 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19219,"bootTime":1714924829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0505 14:20:48.991949 56262 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0505 14:20:49.013239 56262 out.go:177] * [ha-671000] minikube v1.33.0 on Darwin 14.4.1
I0505 14:20:49.055173 56262 out.go:177] - MINIKUBE_LOCATION=18602
I0505 14:20:49.055223 56262 notify.go:220] Checking for updates...
I0505 14:20:49.077109 56262 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:20:49.097964 56262 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0505 14:20:49.119233 56262 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0505 14:20:49.139935 56262 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
I0505 14:20:49.161146 56262 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0505 14:20:49.182881 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:20:49.183046 56262 driver.go:392] Setting default libvirt URI to qemu:///system
I0505 14:20:49.183689 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:20:49.183764 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:20:49.193369 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57871
I0505 14:20:49.193700 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:20:49.194120 56262 main.go:141] libmachine: Using API Version 1
I0505 14:20:49.194134 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:20:49.194326 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:20:49.194462 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:20:49.223183 56262 out.go:177] * Using the hyperkit driver based on existing profile
I0505 14:20:49.265211 56262 start.go:297] selected driver: hyperkit
I0505 14:20:49.265249 56262 start.go:901] validating driver "hyperkit" against &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0505 14:20:49.265473 56262 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0505 14:20:49.265691 56262 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0505 14:20:49.265889 56262 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0505 14:20:49.275605 56262 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
I0505 14:20:49.280711 56262 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:20:49.280731 56262 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0505 14:20:49.284127 56262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0505 14:20:49.284202 56262 cni.go:84] Creating CNI manager for ""
I0505 14:20:49.284211 56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0505 14:20:49.284292 56262 start.go:340] cluster config:
{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false he
lm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0505 14:20:49.284394 56262 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0505 14:20:49.326088 56262 out.go:177] * Starting "ha-671000" primary control-plane node in "ha-671000" cluster
I0505 14:20:49.347002 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:20:49.347074 56262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0505 14:20:49.347098 56262 cache.go:56] Caching tarball of preloaded images
I0505 14:20:49.347288 56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0505 14:20:49.347306 56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:20:49.347472 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:20:49.348516 56262 start.go:360] acquireMachinesLock for ha-671000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:20:49.348656 56262 start.go:364] duration metric: took 99.405µs to acquireMachinesLock for "ha-671000"
I0505 14:20:49.348707 56262 start.go:96] Skipping create...Using existing machine configuration
I0505 14:20:49.348726 56262 fix.go:54] fixHost starting:
I0505 14:20:49.349125 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:20:49.349160 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:20:49.358523 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57873
I0505 14:20:49.358884 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:20:49.359279 56262 main.go:141] libmachine: Using API Version 1
I0505 14:20:49.359298 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:20:49.359523 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:20:49.359669 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:20:49.359788 56262 main.go:141] libmachine: (ha-671000) Calling .GetState
I0505 14:20:49.359894 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:20:49.359963 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 55694
I0505 14:20:49.360866 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
I0505 14:20:49.360926 56262 fix.go:112] recreateIfNeeded on ha-671000: state=Stopped err=<nil>
I0505 14:20:49.360950 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
W0505 14:20:49.361041 56262 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:20:49.402877 56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000" ...
I0505 14:20:49.423939 56262 main.go:141] libmachine: (ha-671000) Calling .Start
I0505 14:20:49.424311 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:20:49.424354 56262 main.go:141] libmachine: (ha-671000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid
I0505 14:20:49.426302 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
I0505 14:20:49.426313 56262 main.go:141] libmachine: (ha-671000) DBG | pid 55694 is in state "Stopped"
I0505 14:20:49.426344 56262 main.go:141] libmachine: (ha-671000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid...
I0505 14:20:49.426771 56262 main.go:141] libmachine: (ha-671000) DBG | Using UUID 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96
I0505 14:20:49.551381 56262 main.go:141] libmachine: (ha-671000) DBG | Generated MAC 72:52:a3:7d:5c:d1
I0505 14:20:49.551411 56262 main.go:141] libmachine: (ha-671000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
I0505 14:20:49.551646 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:20:49.551692 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:20:49.551780 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyp
rintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
I0505 14:20:49.551846 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nom
odeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
I0505 14:20:49.551864 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0505 14:20:49.553184 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Pid is 56275
I0505 14:20:49.553639 56262 main.go:141] libmachine: (ha-671000) DBG | Attempt 0
I0505 14:20:49.553663 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:20:49.553735 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
I0505 14:20:49.555494 56262 main.go:141] libmachine: (ha-671000) DBG | Searching for 72:52:a3:7d:5c:d1 in /var/db/dhcpd_leases ...
I0505 14:20:49.555595 56262 main.go:141] libmachine: (ha-671000) DBG | Found 53 entries in /var/db/dhcpd_leases!
I0505 14:20:49.555611 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
I0505 14:20:49.555629 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
I0505 14:20:49.555648 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
I0505 14:20:49.555661 56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394853}
I0505 14:20:49.555667 56262 main.go:141] libmachine: (ha-671000) DBG | Found match: 72:52:a3:7d:5c:d1
I0505 14:20:49.555674 56262 main.go:141] libmachine: (ha-671000) DBG | IP: 192.169.0.51
I0505 14:20:49.555696 56262 main.go:141] libmachine: (ha-671000) Calling .GetConfigRaw
I0505 14:20:49.556342 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:20:49.556516 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:20:49.556975 56262 machine.go:94] provisionDockerMachine start ...
I0505 14:20:49.556985 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:20:49.557119 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:20:49.557222 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:20:49.557336 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:20:49.557465 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:20:49.557602 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:20:49.557742 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:20:49.557972 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:20:49.557981 56262 main.go:141] libmachine: About to run SSH command:
hostname
I0505 14:20:49.561305 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0505 14:20:49.617858 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0505 14:20:49.618520 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:20:49.618541 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:20:49.618548 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:20:49.618556 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:20:50.003923 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0505 14:20:50.003954 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0505 14:20:50.118574 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:20:50.118591 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:20:50.118604 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:20:50.118620 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:20:50.119491 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0505 14:20:50.119502 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0505 14:20:55.386088 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0505 14:20:55.386105 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0505 14:20:55.386124 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0505 14:20:55.410129 56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0505 14:20:59.165992 56262 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.51:22: connect: connection refused
I0505 14:21:02.226047 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0505 14:21:02.226063 56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
I0505 14:21:02.226198 56262 buildroot.go:166] provisioning hostname "ha-671000"
I0505 14:21:02.226208 56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
I0505 14:21:02.226303 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.226392 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.226492 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.226582 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.226673 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.226801 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.226937 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.226945 56262 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-671000 && echo "ha-671000" | sudo tee /etc/hostname
I0505 14:21:02.297369 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000
I0505 14:21:02.297395 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.297543 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.297643 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.297751 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.297848 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.297983 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.298121 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.298132 56262 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-671000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000/g' /etc/hosts;
else
echo '127.0.1.1 ha-671000' | sudo tee -a /etc/hosts;
fi
fi
I0505 14:21:02.363709 56262 main.go:141] libmachine: SSH cmd err, output: <nil>:
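The two SSH commands above are the standard hostname provisioning pattern: write the machine name via hostname/tee, then make it locally resolvable through a 127.0.1.1 entry so tools like sudo do not complain about an unresolvable host. A compressed shell sketch of the same idea (NAME is a placeholder; this is not minikube's own code):
NAME=ha-671000                      # placeholder; the log uses the profile name
sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
if ! grep -q "$NAME" /etc/hosts; then
  if grep -q '^127\.0\.1\.1' /etc/hosts; then
    sudo sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $NAME/" /etc/hosts
  else
    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
  fi
fi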
I0505 14:21:02.363736 56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
I0505 14:21:02.363757 56262 buildroot.go:174] setting up certificates
I0505 14:21:02.363764 56262 provision.go:84] configureAuth start
I0505 14:21:02.363771 56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
I0505 14:21:02.363911 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:21:02.364012 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.364108 56262 provision.go:143] copyHostCerts
I0505 14:21:02.364139 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:02.364208 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
I0505 14:21:02.364216 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:02.364363 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
I0505 14:21:02.364576 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:02.364616 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
I0505 14:21:02.364621 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:02.364702 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
I0505 14:21:02.364858 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:02.364899 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
I0505 14:21:02.364904 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:02.364979 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
I0505 14:21:02.365133 56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000 san=[127.0.0.1 192.169.0.51 ha-671000 localhost minikube]
I0505 14:21:02.566783 56262 provision.go:177] copyRemoteCerts
I0505 14:21:02.566851 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0505 14:21:02.566867 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.567002 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.567081 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.567166 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.567249 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:02.603993 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0505 14:21:02.604064 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0505 14:21:02.623864 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
I0505 14:21:02.623931 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0505 14:21:02.642984 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0505 14:21:02.643054 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0505 14:21:02.662651 56262 provision.go:87] duration metric: took 298.874135ms to configureAuth
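configureAuth above refreshed the host-side copies of ca.pem/cert.pem/key.pem, generated a server certificate whose SANs cover 127.0.0.1, 192.169.0.51, ha-671000, localhost and minikube, and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest; these are the files the dockerd TLS flags in the unit written next refer to. A hedged way to check the result by hand (paths shortened; $MINIKUBE_HOME stands in for the long profile directory in the log):
# on the guest: the server cert and its SANs
openssl x509 -in /etc/docker/server.pem -noout -subject
openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
# from the host: dockerd listens with TLS on 2376 (see the ExecStart below)
docker --tlsverify \
  --tlscacert "$MINIKUBE_HOME/certs/ca.pem" \
  --tlscert "$MINIKUBE_HOME/certs/cert.pem" \
  --tlskey "$MINIKUBE_HOME/certs/key.pem" \
  -H tcp://192.169.0.51:2376 version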
I0505 14:21:02.662663 56262 buildroot.go:189] setting minikube options for container-runtime
I0505 14:21:02.662832 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:02.662845 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:02.662976 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.663065 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.663164 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.663269 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.663357 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.663467 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.663594 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.663602 56262 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0505 14:21:02.721847 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0505 14:21:02.721864 56262 buildroot.go:70] root file system type: tmpfs
I0505 14:21:02.721944 56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0505 14:21:02.721957 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.722094 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.722182 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.722290 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.722379 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.722504 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.722641 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.722685 56262 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0505 14:21:02.791477 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0505 14:21:02.791499 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:02.791628 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:02.791713 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.791806 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:02.791895 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:02.792000 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:02.792138 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:02.792148 56262 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0505 14:21:04.463791 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
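The "diff: can't stat" line is not a failure: the command above only replaces /lib/systemd/system/docker.service when the freshly rendered docker.service.new differs from the installed unit (or, as here on a restarted VM, no unit exists yet), then reloads systemd, enables the unit (hence the symlink message) and restarts it. The generic install-if-changed pattern, as a standalone sketch:
UNIT=docker.service
NEW=/lib/systemd/system/${UNIT}.new          # freshly rendered unit
CUR=/lib/systemd/system/${UNIT}              # currently installed unit (may not exist)
sudo diff -u "$CUR" "$NEW" || {
  sudo mv "$NEW" "$CUR"
  sudo systemctl daemon-reload
  sudo systemctl enable "$UNIT"
  sudo systemctl restart "$UNIT"
}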
I0505 14:21:04.463805 56262 machine.go:97] duration metric: took 14.90688888s to provisionDockerMachine
I0505 14:21:04.463814 56262 start.go:293] postStartSetup for "ha-671000" (driver="hyperkit")
I0505 14:21:04.463821 56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0505 14:21:04.463832 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.464011 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0505 14:21:04.464034 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.464144 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.464235 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.464343 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.464431 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.510297 56262 ssh_runner.go:195] Run: cat /etc/os-release
I0505 14:21:04.514333 56262 info.go:137] Remote host: Buildroot 2023.02.9
I0505 14:21:04.514346 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
I0505 14:21:04.514446 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
I0505 14:21:04.514637 56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
I0505 14:21:04.514644 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
I0505 14:21:04.514851 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0505 14:21:04.528097 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:04.557607 56262 start.go:296] duration metric: took 93.785206ms for postStartSetup
I0505 14:21:04.557630 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.557802 56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0505 14:21:04.557815 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.557914 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.558026 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.558104 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.558180 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.595384 56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0505 14:21:04.595439 56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0505 14:21:04.627954 56262 fix.go:56] duration metric: took 15.279298664s for fixHost
I0505 14:21:04.627978 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.628106 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.628210 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.628316 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.628400 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.628519 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:04.628664 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.51 22 <nil> <nil>}
I0505 14:21:04.628671 56262 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0505 14:21:04.687788 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944064.851392424
I0505 14:21:04.687801 56262 fix.go:216] guest clock: 1714944064.851392424
I0505 14:21:04.687806 56262 fix.go:229] Guest: 2024-05-05 14:21:04.851392424 -0700 PDT Remote: 2024-05-05 14:21:04.627967 -0700 PDT m=+15.708271847 (delta=223.425424ms)
I0505 14:21:04.687822 56262 fix.go:200] guest clock delta is within tolerance: 223.425424ms
I0505 14:21:04.687828 56262 start.go:83] releasing machines lock for "ha-671000", held for 15.339229169s
I0505 14:21:04.687844 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.687975 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:21:04.688073 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.688362 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.688461 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:04.688537 56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0505 14:21:04.688563 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.688585 56262 ssh_runner.go:195] Run: cat /version.json
I0505 14:21:04.688594 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:04.688666 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.688681 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:04.688776 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.688794 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:04.688857 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.688870 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:04.688932 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.688951 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:04.773179 56262 ssh_runner.go:195] Run: systemctl --version
I0505 14:21:04.778074 56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0505 14:21:04.782225 56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0505 14:21:04.782267 56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0505 14:21:04.795505 56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0505 14:21:04.795515 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:04.795626 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:04.813193 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0505 14:21:04.822043 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0505 14:21:04.830859 56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0505 14:21:04.830912 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0505 14:21:04.839650 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:04.848348 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0505 14:21:04.857332 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:04.866100 56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0505 14:21:04.874955 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0505 14:21:04.883995 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0505 14:21:04.892686 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0505 14:21:04.901641 56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0505 14:21:04.909531 56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0505 14:21:04.917434 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:05.025345 56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0505 14:21:05.045401 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:05.045483 56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0505 14:21:05.056970 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:05.067558 56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0505 14:21:05.082472 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:05.093595 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:05.104660 56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0505 14:21:05.123434 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:05.136644 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:05.151834 56262 ssh_runner.go:195] Run: which cri-dockerd
I0505 14:21:05.154642 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0505 14:21:05.162375 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0505 14:21:05.175761 56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0505 14:21:05.270844 56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0505 14:21:05.375810 56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0505 14:21:05.375883 56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0505 14:21:05.390245 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:05.495960 56262 ssh_runner.go:195] Run: sudo systemctl restart docker
I0505 14:21:07.797662 56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301692609s)
I0505 14:21:07.797733 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0505 14:21:07.809357 56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0505 14:21:07.822066 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:07.832350 56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0505 14:21:07.930252 56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0505 14:21:08.029360 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:08.124190 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0505 14:21:08.137986 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:08.149027 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:08.258895 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0505 14:21:08.326102 56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0505 14:21:08.326177 56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0505 14:21:08.330736 56262 start.go:562] Will wait 60s for crictl version
I0505 14:21:08.330787 56262 ssh_runner.go:195] Run: which crictl
I0505 14:21:08.333926 56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0505 14:21:08.360867 56262 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.0.2
RuntimeApiVersion: v1
I0505 14:21:08.360957 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:08.380536 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:08.444390 56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
I0505 14:21:08.444441 56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
I0505 14:21:08.444833 56262 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0505 14:21:08.449245 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:08.459088 56262 kubeadm.go:877] updating cluster {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fal
se freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0505 14:21:08.459178 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:21:08.459237 56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0505 14:21:08.472336 56262 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
ghcr.io/kube-vip/kube-vip:v0.7.1
registry.k8s.io/etcd:3.5.12-0
kindest/kindnetd:v20240202-8f1494ea
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0505 14:21:08.472348 56262 docker.go:615] Images already preloaded, skipping extraction
I0505 14:21:08.472419 56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0505 14:21:08.484264 56262 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
ghcr.io/kube-vip/kube-vip:v0.7.1
registry.k8s.io/etcd:3.5.12-0
kindest/kindnetd:v20240202-8f1494ea
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28
-- /stdout --
I0505 14:21:08.484284 56262 cache_images.go:84] Images are preloaded, skipping loading
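The two identical image listings confirm that the Docker image store on the restarted VM already holds everything the preload tarball found earlier (preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4) provides, so no extraction or per-image pull is needed. The same check by hand, on the guest:
docker images --format '{{.Repository}}:{{.Tag}}' | sort
# or confirm a single expected image directly
docker image inspect registry.k8s.io/kube-apiserver:v1.30.0 --format '{{.Id}}'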
I0505 14:21:08.484299 56262 kubeadm.go:928] updating node { 192.169.0.51 8443 v1.30.0 docker true true} ...
I0505 14:21:08.484375 56262 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.51
[Install]
config:
{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0505 14:21:08.484439 56262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0505 14:21:08.500967 56262 cni.go:84] Creating CNI manager for ""
I0505 14:21:08.500979 56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
I0505 14:21:08.500990 56262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0505 14:21:08.501005 56262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671000 NodeName:ha-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0505 14:21:08.501088 56262 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.169.0.51
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "ha-671000"
kubeletExtraArgs:
node-ip: 192.169.0.51
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.169.0.51"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
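The block above is the kubeadm/kubelet/kube-proxy config minikube renders for this node; a few lines below it is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the existing /var/tmp/minikube/kubeadm.yaml. A rough way to spot-check the rendered file from the host (a sketch only; the out/minikube-darwin-amd64 binary, profile name and paths are taken from this log):
  out/minikube-darwin-amd64 -p ha-671000 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
  out/minikube-darwin-amd64 -p ha-671000 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new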
I0505 14:21:08.501113 56262 kube-vip.go:111] generating kube-vip config ...
I0505 14:21:08.501162 56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0505 14:21:08.513119 56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
I0505 14:21:08.513193 56262 kube-vip.go:133] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.169.0.254
- name: prometheus_server
value: :2112
- name: lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.7.1
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
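This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step a few lines below), so the kubelet runs kube-vip as a static pod that holds the HA virtual IP 192.169.0.254 and load-balances port 8443 across the control planes. A quick sanity check, assuming the same profile and VIP as in this log (sketch):
  out/minikube-darwin-amd64 -p ha-671000 ssh -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
  out/minikube-darwin-amd64 -p ha-671000 kubectl -- -n kube-system get pods -o wide
  ping -c 1 192.169.0.254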
I0505 14:21:08.513250 56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0505 14:21:08.521487 56262 binaries.go:44] Found k8s binaries, skipping transfer
I0505 14:21:08.521531 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
I0505 14:21:08.528952 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
I0505 14:21:08.542487 56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0505 14:21:08.556157 56262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
I0505 14:21:08.570110 56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
I0505 14:21:08.584111 56262 ssh_runner.go:195] Run: grep 192.169.0.254 control-plane.minikube.internal$ /etc/hosts
I0505 14:21:08.586992 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:08.596597 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:08.710024 56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
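If the restart hangs around this point, the kubelet that was just daemon-reloaded and started can be inspected directly on the node (sketch; these are standard systemctl/journalctl options):
  out/minikube-darwin-amd64 -p ha-671000 ssh -- sudo systemctl status kubelet --no-pager
  out/minikube-darwin-amd64 -p ha-671000 ssh -- sudo journalctl -u kubelet --no-pager -n 50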
I0505 14:21:08.724251 56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.51
I0505 14:21:08.724262 56262 certs.go:194] generating shared ca certs ...
I0505 14:21:08.724272 56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:08.724457 56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
I0505 14:21:08.724528 56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
I0505 14:21:08.724539 56262 certs.go:256] generating profile certs ...
I0505 14:21:08.724648 56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
I0505 14:21:08.724671 56262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190
I0505 14:21:08.724686 56262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.53 192.169.0.254]
I0505 14:21:08.826095 56262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 ...
I0505 14:21:08.826111 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190: {Name:mk26b58616f2e9bcce56069037dda85d1d8c350c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:08.826754 56262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 ...
I0505 14:21:08.826765 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190: {Name:mk7fc32008d240a4b7e6cb64bdeb1f596430582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:08.826983 56262 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
I0505 14:21:08.827192 56262 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
I0505 14:21:08.827434 56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
I0505 14:21:08.827443 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0505 14:21:08.827466 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0505 14:21:08.827487 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0505 14:21:08.827506 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0505 14:21:08.827523 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0505 14:21:08.827541 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0505 14:21:08.827559 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0505 14:21:08.827576 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0505 14:21:08.827667 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
W0505 14:21:08.827718 56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
I0505 14:21:08.827726 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
I0505 14:21:08.827758 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
I0505 14:21:08.827791 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
I0505 14:21:08.827822 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
I0505 14:21:08.827892 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:08.827924 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:08.827970 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
I0505 14:21:08.827988 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
I0505 14:21:08.828425 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0505 14:21:08.851250 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0505 14:21:08.872963 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0505 14:21:08.895079 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0505 14:21:08.922893 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0505 14:21:08.953937 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0505 14:21:08.983911 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0505 14:21:09.023252 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0505 14:21:09.070795 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0505 14:21:09.113576 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
I0505 14:21:09.150037 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
I0505 14:21:09.170089 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0505 14:21:09.184262 56262 ssh_runner.go:195] Run: openssl version
I0505 14:21:09.188637 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
I0505 14:21:09.197186 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
I0505 14:21:09.200763 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 5 21:08 /usr/share/ca-certificates/542102.pem
I0505 14:21:09.200802 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
I0505 14:21:09.205113 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
I0505 14:21:09.213846 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0505 14:21:09.222459 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:09.225992 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 5 20:59 /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:09.226036 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:09.230212 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0505 14:21:09.238744 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
I0505 14:21:09.247131 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
I0505 14:21:09.250641 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 5 21:08 /usr/share/ca-certificates/54210.pem
I0505 14:21:09.250684 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
I0505 14:21:09.254933 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
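The three ls/openssl/ln sequences above follow the usual OpenSSL CA-directory convention: each certificate under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can find it by hash. Done by hand inside the VM it is roughly (sketch, values from this run):
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0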
I0505 14:21:09.263283 56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0505 14:21:09.266913 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0505 14:21:09.271690 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0505 14:21:09.276202 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0505 14:21:09.280723 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0505 14:21:09.285120 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0505 14:21:09.289468 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
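openssl x509 -checkend 86400 exits 0 only if the certificate is still valid for at least another 86400 seconds (24 hours), so a non-zero exit from any of the checks above is presumably what would push minikube toward regenerating control-plane certs. The same check by hand (sketch):
  out/minikube-darwin-amd64 -p ha-671000 ssh -- 'sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid || echo expires-within-24h'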
I0505 14:21:09.293767 56262 kubeadm.go:391] StartCluster: {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false
freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0505 14:21:09.293893 56262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0505 14:21:09.305167 56262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
W0505 14:21:09.312937 56262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
I0505 14:21:09.312947 56262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
I0505 14:21:09.312965 56262 kubeadm.go:587] restartPrimaryControlPlane start ...
I0505 14:21:09.313010 56262 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0505 14:21:09.320777 56262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0505 14:21:09.321098 56262 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671000" does not appear in /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:09.321183 56262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-53665/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671000" cluster setting kubeconfig missing "ha-671000" context setting]
I0505 14:21:09.321347 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:09.321996 56262 loader.go:395] Config loaded from file: /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:09.322179 56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0505 14:21:09.322483 56262 cert_rotation.go:137] Starting client certificate rotation controller
I0505 14:21:09.322660 56262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0505 14:21:09.330103 56262 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.51
I0505 14:21:09.330115 56262 kubeadm.go:591] duration metric: took 17.1285ms to restartPrimaryControlPlane
I0505 14:21:09.330120 56262 kubeadm.go:393] duration metric: took 36.320628ms to StartCluster
I0505 14:21:09.330127 56262 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:09.330217 56262 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:09.330637 56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:09.330863 56262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0505 14:21:09.330875 56262 start.go:240] waiting for startup goroutines ...
I0505 14:21:09.330887 56262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0505 14:21:09.373046 56262 out.go:177] * Enabled addons:
I0505 14:21:09.331023 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:09.395270 56262 addons.go:510] duration metric: took 64.318856ms for enable addons: enabled=[]
I0505 14:21:09.395388 56262 start.go:245] waiting for cluster config update ...
I0505 14:21:09.395406 56262 start.go:254] writing updated cluster config ...
I0505 14:21:09.418289 56262 out.go:177]
I0505 14:21:09.439589 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:09.439723 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:21:09.462158 56262 out.go:177] * Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
I0505 14:21:09.504016 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:21:09.504076 56262 cache.go:56] Caching tarball of preloaded images
I0505 14:21:09.504246 56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0505 14:21:09.504264 56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:21:09.504398 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:21:09.505447 56262 start.go:360] acquireMachinesLock for ha-671000-m02: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:21:09.505557 56262 start.go:364] duration metric: took 85.865µs to acquireMachinesLock for "ha-671000-m02"
I0505 14:21:09.505582 56262 start.go:96] Skipping create...Using existing machine configuration
I0505 14:21:09.505589 56262 fix.go:54] fixHost starting: m02
I0505 14:21:09.506042 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:21:09.506080 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:21:09.515413 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57896
I0505 14:21:09.515746 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:21:09.516119 56262 main.go:141] libmachine: Using API Version 1
I0505 14:21:09.516136 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:21:09.516414 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:21:09.516555 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:09.516655 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
I0505 14:21:09.516736 56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:09.516805 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56210
I0505 14:21:09.517744 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
I0505 14:21:09.517764 56262 fix.go:112] recreateIfNeeded on ha-671000-m02: state=Stopped err=<nil>
I0505 14:21:09.517774 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
W0505 14:21:09.517855 56262 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:21:09.539362 56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m02" ...
I0505 14:21:09.581177 56262 main.go:141] libmachine: (ha-671000-m02) Calling .Start
I0505 14:21:09.581513 56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:09.581582 56262 main.go:141] libmachine: (ha-671000-m02) minikube might have been shut down in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid
I0505 14:21:09.583319 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
I0505 14:21:09.583336 56262 main.go:141] libmachine: (ha-671000-m02) DBG | pid 56210 is in state "Stopped"
I0505 14:21:09.583361 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid...
I0505 14:21:09.583762 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Using UUID 294bfc97-3e6f-4d68-b3f3-54381951a5e8
I0505 14:21:09.611765 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Generated MAC 92:83:2c:36:f7:7d
I0505 14:21:09.611789 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
I0505 14:21:09.611924 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:21:09.611964 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:21:09.612015 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "294bfc97-3e6f-4d68-b3f3-54381951a5e8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/
machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
I0505 14:21:09.612064 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 294bfc97-3e6f-4d68-b3f3-54381951a5e8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
I0505 14:21:09.612079 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0505 14:21:09.613498 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Pid is 56285
I0505 14:21:09.613935 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Attempt 0
I0505 14:21:09.613949 56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:09.614012 56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
I0505 14:21:09.615713 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Searching for 92:83:2c:36:f7:7d in /var/db/dhcpd_leases ...
I0505 14:21:09.615841 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
I0505 14:21:09.615860 56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
I0505 14:21:09.615883 56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
I0505 14:21:09.615897 56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
I0505 14:21:09.615905 56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found match: 92:83:2c:36:f7:7d
I0505 14:21:09.615916 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetConfigRaw
I0505 14:21:09.615920 56262 main.go:141] libmachine: (ha-671000-m02) DBG | IP: 192.169.0.52
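The hyperkit driver recovers the m02 IP by scanning the macOS DHCP lease file for the VM's generated MAC address. The same lookup can be done by hand on the host (sketch; MAC taken from the log above):
  grep -i -B 1 -A 3 '92:83:2c:36:f7:7d' /var/db/dhcpd_leases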
I0505 14:21:09.616579 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:09.616779 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:21:09.617318 56262 machine.go:94] provisionDockerMachine start ...
I0505 14:21:09.617329 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:09.617443 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:09.617536 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:09.617633 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:09.617737 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:09.617836 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:09.617968 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:09.618123 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:09.618132 56262 main.go:141] libmachine: About to run SSH command:
hostname
I0505 14:21:09.621348 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0505 14:21:09.630281 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0505 14:21:09.631193 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:21:09.631218 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:21:09.631230 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:21:09.631252 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:21:10.019586 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0505 14:21:10.019603 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0505 14:21:10.134248 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:21:10.134266 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:21:10.134281 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:21:10.134292 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:21:10.135185 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0505 14:21:10.135199 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0505 14:21:15.419942 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0505 14:21:15.419970 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0505 14:21:15.419978 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0505 14:21:15.445269 56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0505 14:21:20.698093 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0505 14:21:20.698110 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
I0505 14:21:20.698266 56262 buildroot.go:166] provisioning hostname "ha-671000-m02"
I0505 14:21:20.698277 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
I0505 14:21:20.698366 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.698443 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:20.698518 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.698602 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.698696 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:20.698824 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:20.698977 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:20.698987 56262 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-671000-m02 && echo "ha-671000-m02" | sudo tee /etc/hostname
I0505 14:21:20.773304 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m02
I0505 14:21:20.773319 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.773451 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:20.773547 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.773625 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.773710 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:20.773837 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:20.773989 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:20.774000 56262 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-671000-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m02/g' /etc/hosts;
else
echo '127.0.1.1 ha-671000-m02' | sudo tee -a /etc/hosts;
fi
fi
I0505 14:21:20.846506 56262 main.go:141] libmachine: SSH cmd err, output: <nil>:
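At this point m02 should answer with its new hostname and carry a matching 127.0.1.1 entry; a quick check against that node (sketch, using minikube's -n node selector):
  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 -- 'hostname; grep ha-671000-m02 /etc/hosts'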
I0505 14:21:20.846523 56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
I0505 14:21:20.846532 56262 buildroot.go:174] setting up certificates
I0505 14:21:20.846537 56262 provision.go:84] configureAuth start
I0505 14:21:20.846545 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
I0505 14:21:20.846678 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:20.846753 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.846822 56262 provision.go:143] copyHostCerts
I0505 14:21:20.846847 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:20.846900 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
I0505 14:21:20.846906 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:21:20.847106 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
I0505 14:21:20.847298 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:20.847327 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
I0505 14:21:20.847332 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:21:20.847414 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
I0505 14:21:20.847555 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:20.847584 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
I0505 14:21:20.847588 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:21:20.847657 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
I0505 14:21:20.847808 56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m02 san=[127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube]
I0505 14:21:20.923054 56262 provision.go:177] copyRemoteCerts
I0505 14:21:20.923102 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0505 14:21:20.923114 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:20.923242 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:20.923344 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:20.923432 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:20.923508 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:20.963007 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0505 14:21:20.963079 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0505 14:21:20.982214 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
I0505 14:21:20.982293 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0505 14:21:21.001587 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0505 14:21:21.001658 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0505 14:21:21.020765 56262 provision.go:87] duration metric: took 174.141582ms to configureAuth
I0505 14:21:21.020780 56262 buildroot.go:189] setting minikube options for container-runtime
I0505 14:21:21.020945 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:21.020958 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:21.021085 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:21.021186 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:21.021280 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.021382 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.021493 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:21.021630 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:21.021764 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:21.021777 56262 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0505 14:21:21.088593 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0505 14:21:21.088605 56262 buildroot.go:70] root file system type: tmpfs
I0505 14:21:21.088686 56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0505 14:21:21.088698 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:21.088827 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:21.088944 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.089047 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.089155 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:21.089299 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:21.089434 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:21.089481 56262 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.169.0.51"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0505 14:21:21.165319 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.169.0.51
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0505 14:21:21.165336 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:21.165469 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:21.165561 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.165660 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:21.165755 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:21.165892 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:21.166034 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:21.166046 56262 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0505 14:21:22.810399 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
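The diff-or-replace one-liner above only swaps in docker.service.new and restarts docker when the rendered unit differs from what is installed; here the diff fails because /lib/systemd/system/docker.service does not exist yet, so the new unit is installed and docker is restarted. The effective unit, including the NO_PROXY environment and the ExecStart override, can be inspected afterwards (sketch):
  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 -- 'systemctl cat docker | grep -E "^(Environment|ExecStart)="'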
I0505 14:21:22.810414 56262 machine.go:97] duration metric: took 13.184745912s to provisionDockerMachine
I0505 14:21:22.810422 56262 start.go:293] postStartSetup for "ha-671000-m02" (driver="hyperkit")
I0505 14:21:22.810435 56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0505 14:21:22.810448 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:22.810630 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0505 14:21:22.810642 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:22.810731 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:22.810813 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.810958 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:22.811059 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:22.854108 56262 ssh_runner.go:195] Run: cat /etc/os-release
I0505 14:21:22.857587 56262 info.go:137] Remote host: Buildroot 2023.02.9
I0505 14:21:22.857599 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
I0505 14:21:22.857687 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
I0505 14:21:22.857827 56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
I0505 14:21:22.857833 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
I0505 14:21:22.857984 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0505 14:21:22.870076 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:22.896680 56262 start.go:296] duration metric: took 86.209325ms for postStartSetup
I0505 14:21:22.896713 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:22.896900 56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0505 14:21:22.896916 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:22.897010 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:22.897116 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.897207 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:22.897282 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:22.937842 56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0505 14:21:22.937898 56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0505 14:21:22.971365 56262 fix.go:56] duration metric: took 13.45726146s for fixHost
I0505 14:21:22.971396 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:22.971537 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:22.971639 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.971717 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:22.971804 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:22.971961 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:21:22.972106 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.52 22 <nil> <nil>}
I0505 14:21:22.972117 56262 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0505 14:21:23.038093 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944083.052286945
I0505 14:21:23.038109 56262 fix.go:216] guest clock: 1714944083.052286945
I0505 14:21:23.038115 56262 fix.go:229] Guest: 2024-05-05 14:21:23.052286945 -0700 PDT Remote: 2024-05-05 14:21:22.971379 -0700 PDT m=+34.042274957 (delta=80.907945ms)
I0505 14:21:23.038125 56262 fix.go:200] guest clock delta is within tolerance: 80.907945ms
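The fix.go lines above record the guest/host clock comparison: the guest time is read over SSH (`date +%s.%N`), the delta against the local timestamp is computed, and the clock is only adjusted when the drift exceeds a tolerance. A small Go sketch of that check using the exact timestamps from the log; the 2s tolerance here is an assumption for illustration, not necessarily minikube's configured value:

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and whether it
// is small enough that no adjustment is needed.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	pdt := time.FixedZone("PDT", -7*3600)
	guest := time.Date(2024, 5, 5, 14, 21, 23, 52286945, pdt)  // guest clock from the log
	host := time.Date(2024, 5, 5, 14, 21, 22, 971379000, pdt)  // remote (host) timestamp from the log
	delta, ok := withinTolerance(guest, host, 2*time.Second)   // tolerance is illustrative
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)     // delta=80.907945ms, as logged
}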
I0505 14:21:23.038129 56262 start.go:83] releasing machines lock for "ha-671000-m02", held for 13.524025366s
I0505 14:21:23.038145 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.038286 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:23.061518 56262 out.go:177] * Found network options:
I0505 14:21:23.083843 56262 out.go:177] - NO_PROXY=192.169.0.51
W0505 14:21:23.105432 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:21:23.105470 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.106334 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.106599 56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
I0505 14:21:23.106711 56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0505 14:21:23.106753 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
W0505 14:21:23.106918 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:21:23.107013 56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0505 14:21:23.107023 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:23.107033 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
I0505 14:21:23.107244 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:23.107275 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
I0505 14:21:23.107414 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
I0505 14:21:23.107468 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:23.107556 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
I0505 14:21:23.107590 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
I0505 14:21:23.107700 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
W0505 14:21:23.143066 56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0505 14:21:23.143128 56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0505 14:21:23.312270 56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0505 14:21:23.312288 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:23.312377 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:23.327567 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0505 14:21:23.336186 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0505 14:21:23.344528 56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0505 14:21:23.344575 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0505 14:21:23.352890 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:23.361005 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0505 14:21:23.369046 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:21:23.377280 56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0505 14:21:23.385827 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0505 14:21:23.394012 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0505 14:21:23.402113 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0505 14:21:23.410536 56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0505 14:21:23.418126 56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0505 14:21:23.425500 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:23.526138 56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0505 14:21:23.544818 56262 start.go:494] detecting cgroup driver to use...
I0505 14:21:23.544892 56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0505 14:21:23.559895 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:23.572081 56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0505 14:21:23.584840 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:21:23.595478 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:23.606028 56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0505 14:21:23.632278 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:21:23.643848 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:21:23.658675 56262 ssh_runner.go:195] Run: which cri-dockerd
I0505 14:21:23.661665 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0505 14:21:23.669850 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0505 14:21:23.683220 56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0505 14:21:23.786303 56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0505 14:21:23.893788 56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0505 14:21:23.893809 56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0505 14:21:23.908293 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:24.010074 56262 ssh_runner.go:195] Run: sudo systemctl restart docker
I0505 14:21:26.298709 56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.287835945s)
I0505 14:21:26.298771 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0505 14:21:26.310190 56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I0505 14:21:26.324652 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:26.336377 56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0505 14:21:26.435974 56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0505 14:21:26.534723 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:26.647643 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0505 14:21:26.661375 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0505 14:21:26.672706 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:26.778709 56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0505 14:21:26.840618 56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0505 14:21:26.840697 56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0505 14:21:26.844919 56262 start.go:562] Will wait 60s for crictl version
I0505 14:21:26.844974 56262 ssh_runner.go:195] Run: which crictl
I0505 14:21:26.849165 56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0505 14:21:26.874329 56262 start.go:578] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 26.0.2
RuntimeApiVersion: v1
I0505 14:21:26.874403 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:26.890208 56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0505 14:21:26.929797 56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
I0505 14:21:26.949648 56262 out.go:177] - env NO_PROXY=192.169.0.51
I0505 14:21:26.970782 56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
I0505 14:21:26.971166 56262 ssh_runner.go:195] Run: grep 192.169.0.1 host.minikube.internal$ /etc/hosts
I0505 14:21:26.975958 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:26.985550 56262 mustload.go:65] Loading cluster: ha-671000
I0505 14:21:26.985727 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:26.985939 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:21:26.985954 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:21:26.994516 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57918
I0505 14:21:26.994869 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:21:26.995203 56262 main.go:141] libmachine: Using API Version 1
I0505 14:21:26.995220 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:21:26.995417 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:21:26.995536 56262 main.go:141] libmachine: (ha-671000) Calling .GetState
I0505 14:21:26.995629 56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:21:26.995703 56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
I0505 14:21:26.996652 56262 host.go:66] Checking if "ha-671000" exists ...
I0505 14:21:26.996892 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:21:26.996917 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:21:27.005463 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57920
I0505 14:21:27.005786 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:21:27.006124 56262 main.go:141] libmachine: Using API Version 1
I0505 14:21:27.006142 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:21:27.006378 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:21:27.006493 56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
I0505 14:21:27.006597 56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.52
I0505 14:21:27.006603 56262 certs.go:194] generating shared ca certs ...
I0505 14:21:27.006614 56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0505 14:21:27.006755 56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
I0505 14:21:27.006813 56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
I0505 14:21:27.006821 56262 certs.go:256] generating profile certs ...
I0505 14:21:27.006913 56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
I0505 14:21:27.006999 56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e823369f
I0505 14:21:27.007048 56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
I0505 14:21:27.007055 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0505 14:21:27.007075 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0505 14:21:27.007095 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0505 14:21:27.007113 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0505 14:21:27.007130 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0505 14:21:27.007151 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0505 14:21:27.007170 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0505 14:21:27.007187 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0505 14:21:27.007262 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
W0505 14:21:27.007299 56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
I0505 14:21:27.007308 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
I0505 14:21:27.007341 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
I0505 14:21:27.007375 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
I0505 14:21:27.007408 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
I0505 14:21:27.007476 56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:21:27.007517 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
I0505 14:21:27.007538 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.007556 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
I0505 14:21:27.007581 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
I0505 14:21:27.007663 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
I0505 14:21:27.007746 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
I0505 14:21:27.007820 56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
I0505 14:21:27.007907 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
I0505 14:21:27.036107 56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
I0505 14:21:27.039382 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0505 14:21:27.047195 56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
I0505 14:21:27.050362 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
I0505 14:21:27.058524 56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
I0505 14:21:27.061585 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0505 14:21:27.069461 56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
I0505 14:21:27.072439 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0505 14:21:27.080982 56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
I0505 14:21:27.084070 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0505 14:21:27.092062 56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
I0505 14:21:27.095149 56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I0505 14:21:27.103105 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0505 14:21:27.123887 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0505 14:21:27.144018 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0505 14:21:27.164034 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0505 14:21:27.183960 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
I0505 14:21:27.204170 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0505 14:21:27.224085 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0505 14:21:27.244379 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0505 14:21:27.264411 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
I0505 14:21:27.283983 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0505 14:21:27.303697 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
I0505 14:21:27.323613 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0505 14:21:27.337907 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
I0505 14:21:27.351842 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0505 14:21:27.365462 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0505 14:21:27.379337 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0505 14:21:27.393337 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I0505 14:21:27.406867 56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0505 14:21:27.420462 56262 ssh_runner.go:195] Run: openssl version
I0505 14:21:27.425063 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
I0505 14:21:27.433747 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
I0505 14:21:27.437275 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 5 21:08 /usr/share/ca-certificates/542102.pem
I0505 14:21:27.437314 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
I0505 14:21:27.441663 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
I0505 14:21:27.450070 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0505 14:21:27.458559 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.462027 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 5 20:59 /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.462088 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0505 14:21:27.466402 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0505 14:21:27.474903 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
I0505 14:21:27.484026 56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
I0505 14:21:27.487471 56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 5 21:08 /usr/share/ca-certificates/54210.pem
I0505 14:21:27.487506 56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
I0505 14:21:27.491806 56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
I0505 14:21:27.500356 56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0505 14:21:27.503912 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0505 14:21:27.508255 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0505 14:21:27.512583 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0505 14:21:27.516997 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0505 14:21:27.521261 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0505 14:21:27.525514 56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
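Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now, so expiring control-plane certs can be regenerated before the node rejoins. A Go sketch of the same check using crypto/x509; the file path is one of the certs from the log and is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` (which fails if it does).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}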
I0505 14:21:27.529849 56262 kubeadm.go:928] updating node {m02 192.169.0.52 8443 v1.30.0 docker true true} ...
I0505 14:21:27.529904 56262 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.52
[Install]
config:
{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0505 14:21:27.529918 56262 kube-vip.go:111] generating kube-vip config ...
I0505 14:21:27.529952 56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0505 14:21:27.542376 56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
I0505 14:21:27.542421 56262 kube-vip.go:133] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
name: kube-vip
namespace: kube-system
spec:
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: port
value: "8443"
- name: vip_interface
value: eth0
- name: vip_cidr
value: "32"
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: address
value: 192.169.0.254
- name: prometheus_server
value: :2112
- name : lb_enable
value: "true"
- name: lb_port
value: "8443"
image: ghcr.io/kube-vip/kube-vip:v0.7.1
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- hostPath:
path: "/etc/kubernetes/admin.conf"
name: kubeconfig
status: {}
I0505 14:21:27.542477 56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0505 14:21:27.550208 56262 binaries.go:44] Found k8s binaries, skipping transfer
I0505 14:21:27.550254 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0505 14:21:27.557751 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0505 14:21:27.571295 56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0505 14:21:27.584791 56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
I0505 14:21:27.598438 56262 ssh_runner.go:195] Run: grep 192.169.0.254 control-plane.minikube.internal$ /etc/hosts
I0505 14:21:27.601396 56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0505 14:21:27.610834 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:27.705062 56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0505 14:21:27.720000 56262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0505 14:21:27.761967 56262 out.go:177] * Verifying Kubernetes components...
I0505 14:21:27.720191 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:21:27.783193 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:21:27.916127 56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0505 14:21:27.937011 56262 loader.go:395] Config loaded from file: /Users/jenkins/minikube-integration/18602-53665/kubeconfig
I0505 14:21:27.937198 56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

W0505 14:21:27.937233 56262 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
I0505 14:21:27.937400 56262 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m02" to be "Ready" ...
I0505 14:21:27.937478 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:27.937483 56262 round_trippers.go:469] Request Headers:
I0505 14:21:27.937491 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:27.937495 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.141758 56262 round_trippers.go:574] Response Status: 200 OK in 9202 milliseconds
I0505 14:21:37.151494 56262 node_ready.go:49] node "ha-671000-m02" has status "Ready":"True"
I0505 14:21:37.151510 56262 node_ready.go:38] duration metric: took 9.212150687s for node "ha-671000-m02" to be "Ready" ...
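The node_ready.go lines above poll GET /api/v1/nodes/ha-671000-m02 until its Ready condition reports True, with a 6m0s overall timeout. A minimal client-go sketch of that check; the kubeconfig path, node name, and timeout come from the log, while the 2s polling interval is an assumption for illustration:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/18602-53665/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same wait budget as the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-671000-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // polling interval is illustrative
	}
	log.Fatal("timed out waiting for node to become Ready")
}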
I0505 14:21:37.151520 56262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0505 14:21:37.151577 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:21:37.151583 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.151590 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.151594 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.191750 56262 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
I0505 14:21:37.198443 56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.198500 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
I0505 14:21:37.198504 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.198511 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.198515 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.209480 56262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0505 14:21:37.210158 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.210166 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.210172 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.210175 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.218742 56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0505 14:21:37.219086 56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.219096 56262 pod_ready.go:81] duration metric: took 20.63356ms for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.219105 56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.219148 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
I0505 14:21:37.219153 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.219162 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.219170 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.221463 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:37.221880 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.221889 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.221897 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.221905 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.226727 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:37.227035 56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.227045 56262 pod_ready.go:81] duration metric: took 7.931899ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.227052 56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.227120 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
I0505 14:21:37.227125 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.227131 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.227135 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.228755 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.229130 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.229137 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.229143 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.229147 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.230595 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.230887 56262 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.230895 56262 pod_ready.go:81] duration metric: took 3.837029ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.230901 56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.230929 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
I0505 14:21:37.230934 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.230939 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.230943 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.232448 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.232868 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:37.232875 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.232880 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.232887 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.234369 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.234695 56262 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.234704 56262 pod_ready.go:81] duration metric: took 3.797599ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.234710 56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.234742 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m03
I0505 14:21:37.234747 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.234753 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.234760 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.236183 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.351671 56262 request.go:629] Waited for 115.086464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
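The "Waited for ... due to client-side throttling, not priority and fairness" messages here and below are client-go's client-side rate limiter delaying requests once the burst of pod/node GETs exceeds its defaults (QPS 5, Burst 10); they are not server-side throttling. A sketch of raising those limits on a rest.Config before building the clientset; the values are illustrative:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/18602-53665/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	// Raise the client-side rate limits so bursts of status polls are not
	// delayed by the default limiter (values chosen for illustration only).
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}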
I0505 14:21:37.351703 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:37.351742 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.351749 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.351752 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.353285 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:37.353602 56262 pod_ready.go:92] pod "etcd-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.353612 56262 pod_ready.go:81] duration metric: took 118.878942ms for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.353624 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.551816 56262 request.go:629] Waited for 198.124765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
I0505 14:21:37.551893 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
I0505 14:21:37.551900 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.551906 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.551909 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.554076 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:37.753242 56262 request.go:629] Waited for 198.55091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.753343 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:37.753355 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.753365 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.753371 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.756033 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:37.756647 56262 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:37.756662 56262 pod_ready.go:81] duration metric: took 402.967586ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.756670 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:37.952604 56262 request.go:629] Waited for 195.869842ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
I0505 14:21:37.952645 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
I0505 14:21:37.952654 56262 round_trippers.go:469] Request Headers:
I0505 14:21:37.952662 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:37.952668 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:37.954903 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.151783 56262 request.go:629] Waited for 196.293382ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:38.151830 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:38.151837 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.151842 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.151847 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.156373 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:38.156768 56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:38.156778 56262 pod_ready.go:81] duration metric: took 400.046736ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.156785 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.351807 56262 request.go:629] Waited for 194.95401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
I0505 14:21:38.351854 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
I0505 14:21:38.351862 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.351904 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.351908 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.354097 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.552842 56262 request.go:629] Waited for 198.080217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:38.552968 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:38.552980 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.552990 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.552997 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.555719 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.556135 56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:38.556146 56262 pod_ready.go:81] duration metric: took 399.298154ms for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.556153 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.752061 56262 request.go:629] Waited for 195.828299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
I0505 14:21:38.752126 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
I0505 14:21:38.752135 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.752148 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.752158 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.754957 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:38.951929 56262 request.go:629] Waited for 196.315529ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:38.951959 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:38.951964 56262 round_trippers.go:469] Request Headers:
I0505 14:21:38.951969 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:38.951973 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:38.953886 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:38.954275 56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:38.954284 56262 pod_ready.go:81] duration metric: took 398.072724ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:21:38.954297 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:39.151925 56262 request.go:629] Waited for 197.547759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.152007 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.152019 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.152025 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.152029 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.157962 56262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0505 14:21:39.352575 56262 request.go:629] Waited for 194.147234ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.352619 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.352625 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.352631 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.352635 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.356708 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:39.553301 56262 request.go:629] Waited for 97.737035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.553334 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.553340 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.553346 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.553351 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.555371 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:39.752052 56262 request.go:629] Waited for 196.251955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.752134 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:39.752145 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.752153 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.752158 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.754627 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:39.955025 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:39.955059 56262 round_trippers.go:469] Request Headers:
I0505 14:21:39.955067 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:39.955072 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:39.956871 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:40.152049 56262 request.go:629] Waited for 194.641301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.152132 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.152171 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.152184 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.152191 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.154660 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:40.456022 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:40.456041 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.456050 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.456056 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.458617 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:40.552124 56262 request.go:629] Waited for 92.99221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.552206 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.552212 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.552220 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.552225 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.554220 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:40.956144 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:40.956162 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.956168 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.956172 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.958759 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:40.959215 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:40.959223 56262 round_trippers.go:469] Request Headers:
I0505 14:21:40.959229 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:40.959232 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:40.960907 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:40.961228 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:41.455646 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:41.455689 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.455698 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.455722 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.457872 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:41.458331 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:41.458339 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.458344 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.458355 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.460082 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:41.955474 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:41.955516 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.955524 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.955528 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.957597 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:41.958178 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:41.958186 56262 round_trippers.go:469] Request Headers:
I0505 14:21:41.958190 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:41.958193 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:41.960269 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:42.454954 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:42.454969 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.454975 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.454978 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.456939 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:42.457382 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:42.457390 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.457395 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.457398 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.459026 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:42.955443 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:42.955465 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.955493 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.955500 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.957908 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:42.958355 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:42.958362 56262 round_trippers.go:469] Request Headers:
I0505 14:21:42.958368 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:42.958371 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:42.959853 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:43.455723 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:43.455776 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.455798 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.455806 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.458560 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:43.458997 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:43.459004 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.459009 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.459013 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.460509 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:43.460811 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:43.955429 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:43.955470 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.955481 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.955487 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.957836 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:43.958298 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:43.958305 56262 round_trippers.go:469] Request Headers:
I0505 14:21:43.958310 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:43.958320 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:43.960083 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:44.455061 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:44.455081 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.455088 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.455091 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.458998 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:44.459504 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:44.459511 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.459517 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.459521 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.461518 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:44.956537 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:44.956577 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.956598 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.956604 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.959253 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:44.959715 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:44.959723 56262 round_trippers.go:469] Request Headers:
I0505 14:21:44.959729 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:44.959733 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:44.961411 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:45.455377 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:45.455402 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.455414 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.455420 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.458080 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:45.458718 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:45.458729 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.458736 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.458752 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.463742 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:45.464348 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:45.955580 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:45.955620 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.955630 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.955635 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.957968 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:45.958442 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:45.958449 56262 round_trippers.go:469] Request Headers:
I0505 14:21:45.958455 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:45.958466 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:45.959999 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:46.457118 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:46.457136 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.457145 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.457149 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.459543 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:46.460023 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:46.460031 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.460036 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.460047 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.461647 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:46.956302 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:46.956318 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.956324 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.956326 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.958416 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:46.958859 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:46.958866 56262 round_trippers.go:469] Request Headers:
I0505 14:21:46.958872 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:46.958874 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:46.960501 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:47.456753 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:47.456797 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.456806 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.456812 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.458891 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:47.459328 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:47.459336 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.459342 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.459345 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.460911 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:47.955503 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:47.955545 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.955558 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.955564 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.959575 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:47.960158 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:47.960166 56262 round_trippers.go:469] Request Headers:
I0505 14:21:47.960171 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:47.960175 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:47.961799 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:47.962164 56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:48.456730 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
I0505 14:21:48.456747 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.456753 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.456757 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.460539 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:48.461047 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:48.461055 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.461061 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.461064 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.465508 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:21:48.465989 56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:48.465998 56262 pod_ready.go:81] duration metric: took 9.510763792s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.466006 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.466042 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m03
I0505 14:21:48.466047 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.466052 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.466055 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.472370 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:21:48.473005 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:21:48.473012 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.473017 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.473020 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.481996 56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0505 14:21:48.482501 56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:48.482510 56262 pod_ready.go:81] duration metric: took 16.497528ms for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.482517 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
I0505 14:21:48.482551 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:48.482556 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.482561 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.482565 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.490468 56262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0505 14:21:48.491138 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:48.491145 56262 round_trippers.go:469] Request Headers:
I0505 14:21:48.491151 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:48.491155 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:48.494380 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:48.983087 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:49.004024 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.004031 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.004035 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.006380 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:49.007016 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:49.007024 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.007030 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.007033 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.008914 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:49.483919 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:49.483931 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.483938 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.483941 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.486104 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:49.486673 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:49.486681 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.486687 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.486691 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.488609 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:49.983081 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:49.983096 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.983104 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.983108 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.985873 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:49.986420 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:49.986428 56262 round_trippers.go:469] Request Headers:
I0505 14:21:49.986434 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:49.986437 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:49.988349 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:50.482957 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:50.482970 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.482976 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.482980 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.485479 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:50.485920 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:50.485927 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.485934 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.485938 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.487720 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:50.488107 56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:50.983210 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:50.983225 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.983232 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.983236 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.986255 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:50.986840 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:50.986849 56262 round_trippers.go:469] Request Headers:
I0505 14:21:50.986855 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:50.986866 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:50.989948 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:51.483355 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:51.483374 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.483388 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.483395 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.486820 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:51.487280 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:51.487287 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.487293 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.487297 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.489325 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:51.983090 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:51.983105 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.983112 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.983115 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.984988 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:51.985393 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:51.985401 56262 round_trippers.go:469] Request Headers:
I0505 14:21:51.985405 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:51.985410 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:51.986930 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:52.484493 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:52.484507 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.484516 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.484521 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.487250 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:52.487686 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:52.487694 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.487698 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.487702 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.489501 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:52.489895 56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:52.983025 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:52.983048 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.983059 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.983066 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.986110 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:52.986621 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:52.986629 56262 round_trippers.go:469] Request Headers:
I0505 14:21:52.986634 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:52.986639 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:52.988098 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:53.484742 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:53.484762 56262 round_trippers.go:469] Request Headers:
I0505 14:21:53.484773 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:53.484779 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:53.488010 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:53.488477 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:53.488487 56262 round_trippers.go:469] Request Headers:
I0505 14:21:53.488495 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:53.488501 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:53.490598 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:53.982981 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:54.035555 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.035577 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.035582 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.038056 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:54.038420 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:54.038427 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.038431 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.038436 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.040740 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:54.483231 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
I0505 14:21:54.483250 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.483259 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.483268 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.486904 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:54.487432 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:21:54.487440 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.487445 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.487453 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.489085 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.489450 56262 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:54.489459 56262 pod_ready.go:81] duration metric: took 6.006607245s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.489472 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.489506 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
I0505 14:21:54.489511 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.489516 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.489520 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.491341 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.492125 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
I0505 14:21:54.492155 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.492161 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.492166 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.494017 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.494387 56262 pod_ready.go:92] pod "kube-proxy-b45s6" in "kube-system" namespace has status "Ready":"True"
I0505 14:21:54.494395 56262 pod_ready.go:81] duration metric: took 4.917824ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.494401 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
I0505 14:21:54.494436 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:54.494441 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.494447 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.494452 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.496166 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.496620 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:54.496627 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.496633 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.496637 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.498306 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:54.996074 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:54.996123 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.996136 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.996145 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:54.999201 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:54.999706 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:54.999714 56262 round_trippers.go:469] Request Headers:
I0505 14:21:54.999720 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:54.999724 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.001519 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:55.495423 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:55.495482 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.495494 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.495500 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.498280 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:55.498730 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:55.498738 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.498744 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.498748 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.500462 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:55.995317 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:55.995337 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.995349 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.995356 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:55.998789 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:55.999222 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:55.999231 56262 round_trippers.go:469] Request Headers:
I0505 14:21:55.999238 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:55.999241 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.001041 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:56.494888 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:56.494946 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.494958 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.494968 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.497790 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:56.498347 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:56.498358 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.498365 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.498371 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.500278 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:56.500656 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:56.994875 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:56.994892 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.994900 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.994906 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:56.998618 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:56.999206 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:56.999214 56262 round_trippers.go:469] Request Headers:
I0505 14:21:56.999220 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:56.999223 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.000855 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:57.495334 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:57.495358 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.495370 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.495375 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.498502 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:57.498951 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:57.498958 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.498963 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.498966 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.500746 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:57.995520 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:57.995543 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.995579 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.995598 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:57.998407 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:57.998972 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:57.998979 56262 round_trippers.go:469] Request Headers:
I0505 14:21:57.998985 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:57.999001 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:58.000625 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:58.495031 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:58.495049 56262 round_trippers.go:469] Request Headers:
I0505 14:21:58.495061 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:58.495067 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:58.498099 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:58.498667 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:58.498677 56262 round_trippers.go:469] Request Headers:
I0505 14:21:58.498685 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:58.498691 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:58.500315 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:58.995219 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:59.001733 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.001744 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.001750 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.004276 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:59.004776 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:59.004783 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.004788 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.004792 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.006346 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:21:59.006731 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:21:59.495209 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:59.495224 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.495243 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.495269 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.498470 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:59.498897 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:59.498905 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.498911 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.498915 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.501440 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:21:59.995151 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:21:59.995179 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.995191 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.995198 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:21:59.998453 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:21:59.999020 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:21:59.999031 56262 round_trippers.go:469] Request Headers:
I0505 14:21:59.999039 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:21:59.999043 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.000983 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:00.495135 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:00.495148 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.495154 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.495158 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.498254 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:00.499175 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:00.499184 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.499190 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.499193 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.501895 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:00.995194 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:00.995216 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.995229 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:00.995237 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.998468 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:00.998920 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:00.998926 56262 round_trippers.go:469] Request Headers:
I0505 14:22:00.998932 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:00.998935 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.000600 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:01.494835 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:01.494860 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.494871 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.494877 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.497889 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:01.498547 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:01.498554 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.498558 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.498561 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.500447 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:01.500751 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:22:01.996453 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:01.996472 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.996483 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.996490 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:01.999407 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:01.999918 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:01.999925 56262 round_trippers.go:469] Request Headers:
I0505 14:22:01.999931 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:01.999934 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.001706 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:02.495361 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:02.495382 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.495393 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.495400 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.498902 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:02.499504 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:02.499511 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.499517 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.499521 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.501049 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:02.995527 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:02.995548 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.995559 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:02.995565 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.998530 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:02.998981 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:02.998988 56262 round_trippers.go:469] Request Headers:
I0505 14:22:02.998994 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:02.998999 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:03.000798 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:03.495714 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:03.495730 56262 round_trippers.go:469] Request Headers:
I0505 14:22:03.495737 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:03.495741 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:03.498051 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:03.498563 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:03.498571 56262 round_trippers.go:469] Request Headers:
I0505 14:22:03.498576 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:03.498588 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:03.500374 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:03.995061 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:04.002434 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.002442 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.002447 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.004861 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:04.005402 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:04.005409 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.005415 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.005418 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.011753 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:22:04.012403 56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
I0505 14:22:04.494873 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:04.494893 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.494902 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.494906 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.497460 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:04.497938 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:04.497946 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.497951 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.497960 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.499356 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:04.995159 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:04.995178 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.995188 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.995195 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:04.998687 56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0505 14:22:04.999335 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:04.999342 56262 round_trippers.go:469] Request Headers:
I0505 14:22:04.999348 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:04.999353 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.000905 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.494984 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
I0505 14:22:05.494997 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.495003 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.495007 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.497333 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.497727 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:05.497735 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.497741 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.497744 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.499501 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.500069 56262 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.500079 56262 pod_ready.go:81] duration metric: took 11.005361676s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.500095 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.500132 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zwgd2
I0505 14:22:05.500137 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.500142 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.500146 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.502320 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.502750 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:22:05.502757 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.502763 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.502767 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.504769 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.505126 56262 pod_ready.go:92] pod "kube-proxy-zwgd2" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.505135 56262 pod_ready.go:81] duration metric: took 5.036025ms for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.505142 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.505179 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
I0505 14:22:05.505184 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.505189 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.505194 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.507083 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.507461 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
I0505 14:22:05.507468 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.507473 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.507477 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.509224 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.509709 56262 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.509724 56262 pod_ready.go:81] duration metric: took 4.57068ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.509732 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.509767 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
I0505 14:22:05.509771 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.509777 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.509780 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.511597 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.511989 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
I0505 14:22:05.511996 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.512000 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.512010 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.514080 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.514548 56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.514556 56262 pod_ready.go:81] duration metric: took 4.819427ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.514563 56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.514599 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m03
I0505 14:22:05.514603 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.514609 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.514612 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.516436 56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
I0505 14:22:05.516907 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
I0505 14:22:05.516914 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.516919 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.516923 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.519043 56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0505 14:22:05.519280 56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
I0505 14:22:05.519288 56262 pod_ready.go:81] duration metric: took 4.719804ms for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
I0505 14:22:05.519294 56262 pod_ready.go:38] duration metric: took 28.365933714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0505 14:22:05.519320 56262 api_server.go:52] waiting for apiserver process to appear ...
I0505 14:22:05.519375 56262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0505 14:22:05.533426 56262 api_server.go:72] duration metric: took 37.809561996s to wait for apiserver process to appear ...
I0505 14:22:05.533438 56262 api_server.go:88] waiting for apiserver healthz status ...
I0505 14:22:05.533454 56262 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
I0505 14:22:05.537141 56262 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
ok
I0505 14:22:05.537173 56262 round_trippers.go:463] GET https://192.169.0.51:8443/version
I0505 14:22:05.537183 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.537191 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.537195 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.537884 56262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
I0505 14:22:05.538028 56262 api_server.go:141] control plane version: v1.30.0
I0505 14:22:05.538038 56262 api_server.go:131] duration metric: took 4.594882ms to wait for apiserver health ...
I0505 14:22:05.538049 56262 system_pods.go:43] waiting for kube-system pods to appear ...
I0505 14:22:05.696401 56262 request.go:629] Waited for 158.305976ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:05.696517 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:05.696529 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.696539 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.696547 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.703009 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:22:05.708412 56262 system_pods.go:59] 26 kube-system pods found
I0505 14:22:05.708432 56262 system_pods.go:61] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:05.708439 56262 system_pods.go:61] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:05.708445 56262 system_pods.go:61] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
I0505 14:22:05.708448 56262 system_pods.go:61] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
I0505 14:22:05.708451 56262 system_pods.go:61] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
I0505 14:22:05.708458 56262 system_pods.go:61] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
I0505 14:22:05.708462 56262 system_pods.go:61] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
I0505 14:22:05.708464 56262 system_pods.go:61] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
I0505 14:22:05.708468 56262 system_pods.go:61] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0505 14:22:05.708471 56262 system_pods.go:61] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
I0505 14:22:05.708474 56262 system_pods.go:61] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
I0505 14:22:05.708477 56262 system_pods.go:61] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
I0505 14:22:05.708482 56262 system_pods.go:61] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
I0505 14:22:05.708487 56262 system_pods.go:61] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
I0505 14:22:05.708489 56262 system_pods.go:61] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
I0505 14:22:05.708493 56262 system_pods.go:61] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
I0505 14:22:05.708495 56262 system_pods.go:61] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
I0505 14:22:05.708497 56262 system_pods.go:61] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
I0505 14:22:05.708500 56262 system_pods.go:61] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
I0505 14:22:05.708502 56262 system_pods.go:61] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
I0505 14:22:05.708505 56262 system_pods.go:61] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
I0505 14:22:05.708507 56262 system_pods.go:61] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
I0505 14:22:05.708510 56262 system_pods.go:61] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
I0505 14:22:05.708512 56262 system_pods.go:61] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
I0505 14:22:05.708515 56262 system_pods.go:61] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
I0505 14:22:05.708520 56262 system_pods.go:61] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
I0505 14:22:05.708525 56262 system_pods.go:74] duration metric: took 170.469417ms to wait for pod list to return data ...
I0505 14:22:05.708531 56262 default_sa.go:34] waiting for default service account to be created ...
I0505 14:22:05.897069 56262 request.go:629] Waited for 188.474109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
I0505 14:22:05.897179 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
I0505 14:22:05.897186 56262 round_trippers.go:469] Request Headers:
I0505 14:22:05.897194 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:05.897199 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:05.950188 56262 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
I0505 14:22:05.950392 56262 default_sa.go:45] found service account: "default"
I0505 14:22:05.950405 56262 default_sa.go:55] duration metric: took 241.864725ms for default service account to be created ...
I0505 14:22:05.950412 56262 system_pods.go:116] waiting for k8s-apps to be running ...
I0505 14:22:06.095263 56262 request.go:629] Waited for 144.804696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:06.095366 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
I0505 14:22:06.095376 56262 round_trippers.go:469] Request Headers:
I0505 14:22:06.095388 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:06.095395 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:06.102144 56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0505 14:22:06.107768 56262 system_pods.go:86] 26 kube-system pods found
I0505 14:22:06.107783 56262 system_pods.go:89] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:06.107794 56262 system_pods.go:89] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0505 14:22:06.107800 56262 system_pods.go:89] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
I0505 14:22:06.107803 56262 system_pods.go:89] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
I0505 14:22:06.107808 56262 system_pods.go:89] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
I0505 14:22:06.107811 56262 system_pods.go:89] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
I0505 14:22:06.107815 56262 system_pods.go:89] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
I0505 14:22:06.107818 56262 system_pods.go:89] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
I0505 14:22:06.107823 56262 system_pods.go:89] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I0505 14:22:06.107826 56262 system_pods.go:89] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
I0505 14:22:06.107831 56262 system_pods.go:89] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
I0505 14:22:06.107834 56262 system_pods.go:89] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
I0505 14:22:06.107838 56262 system_pods.go:89] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
I0505 14:22:06.107842 56262 system_pods.go:89] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
I0505 14:22:06.107847 56262 system_pods.go:89] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
I0505 14:22:06.107854 56262 system_pods.go:89] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
I0505 14:22:06.107862 56262 system_pods.go:89] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
I0505 14:22:06.107866 56262 system_pods.go:89] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
I0505 14:22:06.107869 56262 system_pods.go:89] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
I0505 14:22:06.107874 56262 system_pods.go:89] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
I0505 14:22:06.107877 56262 system_pods.go:89] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
I0505 14:22:06.107887 56262 system_pods.go:89] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
I0505 14:22:06.107890 56262 system_pods.go:89] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
I0505 14:22:06.107894 56262 system_pods.go:89] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
I0505 14:22:06.107897 56262 system_pods.go:89] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
I0505 14:22:06.107900 56262 system_pods.go:89] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
I0505 14:22:06.107905 56262 system_pods.go:126] duration metric: took 157.48572ms to wait for k8s-apps to be running ...
I0505 14:22:06.107910 56262 system_svc.go:44] waiting for kubelet service to be running ....
I0505 14:22:06.107954 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0505 14:22:06.119916 56262 system_svc.go:56] duration metric: took 12.002036ms WaitForService to wait for kubelet
I0505 14:22:06.119930 56262 kubeadm.go:576] duration metric: took 38.396059047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0505 14:22:06.119941 56262 node_conditions.go:102] verifying NodePressure condition ...
I0505 14:22:06.295252 56262 request.go:629] Waited for 175.271788ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
I0505 14:22:06.295332 56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
I0505 14:22:06.295338 56262 round_trippers.go:469] Request Headers:
I0505 14:22:06.295345 56262 round_trippers.go:473] Accept: application/json, */*
I0505 14:22:06.295350 56262 round_trippers.go:473] User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
I0505 14:22:06.299820 56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0505 14:22:06.300760 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300774 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300783 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300787 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300791 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300794 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300797 56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0505 14:22:06.300801 56262 node_conditions.go:123] node cpu capacity is 2
I0505 14:22:06.300804 56262 node_conditions.go:105] duration metric: took 180.85639ms to run NodePressure ...
I0505 14:22:06.300811 56262 start.go:240] waiting for startup goroutines ...
I0505 14:22:06.300829 56262 start.go:254] writing updated cluster config ...
I0505 14:22:06.322636 56262 out.go:177]
I0505 14:22:06.343913 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:22:06.344042 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:22:06.366539 56262 out.go:177] * Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
I0505 14:22:06.408466 56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0505 14:22:06.408493 56262 cache.go:56] Caching tarball of preloaded images
I0505 14:22:06.408686 56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0505 14:22:06.408703 56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0505 14:22:06.408834 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:22:06.409908 56262 start.go:360] acquireMachinesLock for ha-671000-m03: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0505 14:22:06.409993 56262 start.go:364] duration metric: took 67.566µs to acquireMachinesLock for "ha-671000-m03"
I0505 14:22:06.410011 56262 start.go:96] Skipping create...Using existing machine configuration
I0505 14:22:06.410016 56262 fix.go:54] fixHost starting: m03
I0505 14:22:06.410315 56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:22:06.410333 56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:22:06.419592 56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57925
I0505 14:22:06.419993 56262 main.go:141] libmachine: () Calling .GetVersion
I0505 14:22:06.420359 56262 main.go:141] libmachine: Using API Version 1
I0505 14:22:06.420375 56262 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:22:06.420588 56262 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:22:06.420701 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:06.420780 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetState
I0505 14:22:06.420862 56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:22:06.420955 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 55740
I0505 14:22:06.421873 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
I0505 14:22:06.421938 56262 fix.go:112] recreateIfNeeded on ha-671000-m03: state=Stopped err=<nil>
I0505 14:22:06.421958 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
W0505 14:22:06.422054 56262 fix.go:138] unexpected machine state, will restart: <nil>
I0505 14:22:06.443498 56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m03" ...
I0505 14:22:06.485588 56262 main.go:141] libmachine: (ha-671000-m03) Calling .Start
I0505 14:22:06.485823 56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:22:06.485876 56262 main.go:141] libmachine: (ha-671000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid
I0505 14:22:06.487603 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
I0505 14:22:06.487617 56262 main.go:141] libmachine: (ha-671000-m03) DBG | pid 55740 is in state "Stopped"
I0505 14:22:06.487633 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid...
I0505 14:22:06.488242 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Using UUID be90591f-7869-4905-ae38-2f481381ca7c
I0505 14:22:06.514163 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Generated MAC ce:17:a:56:1e:f8
I0505 14:22:06.514197 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
I0505 14:22:06.514318 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:22:06.514365 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0505 14:22:06.514413 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "be90591f-7869-4905-ae38-2f481381ca7c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
I0505 14:22:06.514460 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U be90591f-7869-4905-ae38-2f481381ca7c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
I0505 14:22:06.514470 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0505 14:22:06.515957 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Pid is 56300
I0505 14:22:06.516349 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Attempt 0
I0505 14:22:06.516370 56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:22:06.516444 56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 56300
I0505 14:22:06.518246 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Searching for ce:17:a:56:1e:f8 in /var/db/dhcpd_leases ...
I0505 14:22:06.518360 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found 53 entries in /var/db/dhcpd_leases!
I0505 14:22:06.518376 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
I0505 14:22:06.518417 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
I0505 14:22:06.518433 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
I0505 14:22:06.518449 56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
I0505 14:22:06.518457 56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found match: ce:17:a:56:1e:f8
I0505 14:22:06.518467 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetConfigRaw
I0505 14:22:06.518473 56262 main.go:141] libmachine: (ha-671000-m03) DBG | IP: 192.169.0.53
I0505 14:22:06.519132 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
I0505 14:22:06.519357 56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
I0505 14:22:06.519808 56262 machine.go:94] provisionDockerMachine start ...
I0505 14:22:06.519818 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:06.519942 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:06.520079 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:06.520182 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:06.520284 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:06.520381 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:06.520488 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:06.520648 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:06.520655 56262 main.go:141] libmachine: About to run SSH command:
hostname
I0505 14:22:06.524407 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0505 14:22:06.532556 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0505 14:22:06.533607 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:22:06.533622 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:22:06.533633 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:22:06.533644 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:22:06.917916 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0505 14:22:06.917942 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0505 14:22:07.032632 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0505 14:22:07.032653 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0505 14:22:07.032677 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0505 14:22:07.032689 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0505 14:22:07.033533 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0505 14:22:07.033546 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0505 14:22:12.402771 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0505 14:22:12.402786 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0505 14:22:12.402806 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0505 14:22:12.426606 56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
I0505 14:22:41.581350 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0505 14:22:41.581367 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
I0505 14:22:41.581506 56262 buildroot.go:166] provisioning hostname "ha-671000-m03"
I0505 14:22:41.581517 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
I0505 14:22:41.581600 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.581683 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.581781 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.581875 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.581960 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.582100 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.582238 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.582247 56262 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-671000-m03 && echo "ha-671000-m03" | sudo tee /etc/hostname
I0505 14:22:41.647083 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m03
I0505 14:22:41.647098 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.647232 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.647343 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.647430 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.647521 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.647657 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.647849 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.647862 56262 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sha-671000-m03' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m03/g' /etc/hosts;
else
echo '127.0.1.1 ha-671000-m03' | sudo tee -a /etc/hosts;
fi
fi
I0505 14:22:41.709306 56262 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0505 14:22:41.709326 56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
I0505 14:22:41.709344 56262 buildroot.go:174] setting up certificates
I0505 14:22:41.709357 56262 provision.go:84] configureAuth start
I0505 14:22:41.709363 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
I0505 14:22:41.709499 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
I0505 14:22:41.709593 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.709680 56262 provision.go:143] copyHostCerts
I0505 14:22:41.709715 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:22:41.709786 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
I0505 14:22:41.709792 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
I0505 14:22:41.709937 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
I0505 14:22:41.710168 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:22:41.710212 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
I0505 14:22:41.710217 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
I0505 14:22:41.710297 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
I0505 14:22:41.710445 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:22:41.710490 56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
I0505 14:22:41.710497 56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
I0505 14:22:41.710575 56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
I0505 14:22:41.710718 56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m03 san=[127.0.0.1 192.169.0.53 ha-671000-m03 localhost minikube]
I0505 14:22:41.753782 56262 provision.go:177] copyRemoteCerts
I0505 14:22:41.753842 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0505 14:22:41.753857 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.753999 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.754106 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.754195 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.754274 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:41.788993 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0505 14:22:41.789066 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0505 14:22:41.808008 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
I0505 14:22:41.808084 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0505 14:22:41.828147 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0505 14:22:41.828228 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0505 14:22:41.848543 56262 provision.go:87] duration metric: took 139.178952ms to configureAuth
I0505 14:22:41.848558 56262 buildroot.go:189] setting minikube options for container-runtime
I0505 14:22:41.848732 56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:22:41.848746 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:41.848890 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.848974 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.849066 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.849145 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.849226 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.849346 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.849468 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.849476 56262 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0505 14:22:41.905134 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0505 14:22:41.905147 56262 buildroot.go:70] root file system type: tmpfs
I0505 14:22:41.905226 56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0505 14:22:41.905236 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.905372 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.905459 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.905559 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.905645 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.905773 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.905913 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.905965 56262 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.169.0.51"
Environment="NO_PROXY=192.169.0.51,192.169.0.52"
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0505 14:22:41.971506 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.169.0.51
Environment=NO_PROXY=192.169.0.51,192.169.0.52
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0505 14:22:41.971532 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:41.971667 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:41.971753 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.971832 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:41.971919 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:41.972061 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:41.972206 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:41.972218 56262 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0505 14:22:43.586757 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0505 14:22:43.586772 56262 machine.go:97] duration metric: took 37.066967123s to provisionDockerMachine
I0505 14:22:43.586795 56262 start.go:293] postStartSetup for "ha-671000-m03" (driver="hyperkit")
I0505 14:22:43.586804 56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0505 14:22:43.586816 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.587008 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0505 14:22:43.587022 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.587109 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.587250 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.587368 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.587470 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:43.621728 56262 ssh_runner.go:195] Run: cat /etc/os-release
I0505 14:22:43.624913 56262 info.go:137] Remote host: Buildroot 2023.02.9
I0505 14:22:43.624927 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
I0505 14:22:43.625027 56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
I0505 14:22:43.625208 56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
I0505 14:22:43.625215 56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
I0505 14:22:43.625422 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0505 14:22:43.632883 56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
I0505 14:22:43.652930 56262 start.go:296] duration metric: took 66.125789ms for postStartSetup
I0505 14:22:43.652964 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.653131 56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0505 14:22:43.653145 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.653240 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.653328 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.653413 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.653505 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:43.687474 56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0505 14:22:43.687532 56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0505 14:22:43.719424 56262 fix.go:56] duration metric: took 37.309414657s for fixHost
I0505 14:22:43.719447 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.719581 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.719680 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.719767 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.719859 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.719991 56262 main.go:141] libmachine: Using SSH client type: native
I0505 14:22:43.720140 56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil> [] 0s} 192.169.0.53 22 <nil> <nil>}
I0505 14:22:43.720147 56262 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0505 14:22:43.777003 56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944163.917671963
I0505 14:22:43.777016 56262 fix.go:216] guest clock: 1714944163.917671963
I0505 14:22:43.777022 56262 fix.go:229] Guest: 2024-05-05 14:22:43.917671963 -0700 PDT Remote: 2024-05-05 14:22:43.719438 -0700 PDT m=+114.784889102 (delta=198.233963ms)
I0505 14:22:43.777033 56262 fix.go:200] guest clock delta is within tolerance: 198.233963ms
I0505 14:22:43.777036 56262 start.go:83] releasing machines lock for "ha-671000-m03", held for 37.367046714s
I0505 14:22:43.777054 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.777184 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
I0505 14:22:43.798458 56262 out.go:177] * Found network options:
I0505 14:22:43.818375 56262 out.go:177] - NO_PROXY=192.169.0.51,192.169.0.52
W0505 14:22:43.839196 56262 proxy.go:119] fail to check proxy env: Error ip not in block
W0505 14:22:43.839212 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:22:43.839223 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.839636 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.839763 56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
I0505 14:22:43.839847 56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0505 14:22:43.839883 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
W0505 14:22:43.839885 56262 proxy.go:119] fail to check proxy env: Error ip not in block
W0505 14:22:43.839898 56262 proxy.go:119] fail to check proxy env: Error ip not in block
I0505 14:22:43.839953 56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0505 14:22:43.839970 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
I0505 14:22:43.839989 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.840065 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
I0505 14:22:43.840123 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.840188 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
I0505 14:22:43.840221 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.840303 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
I0505 14:22:43.840332 56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
I0505 14:22:43.840420 56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
W0505 14:22:43.919168 56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0505 14:22:43.919245 56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0505 14:22:43.936501 56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0505 14:22:43.936515 56262 start.go:494] detecting cgroup driver to use...
I0505 14:22:43.936582 56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:22:43.953774 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0505 14:22:43.963068 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0505 14:22:43.972111 56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0505 14:22:43.972163 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0505 14:22:43.981147 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:22:44.011701 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0505 14:22:44.020897 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0505 14:22:44.030143 56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0505 14:22:44.039491 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0505 14:22:44.048778 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0505 14:22:44.057937 56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0505 14:22:44.067298 56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0505 14:22:44.075698 56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0505 14:22:44.083983 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:22:44.200980 56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0505 14:22:44.219877 56262 start.go:494] detecting cgroup driver to use...
I0505 14:22:44.219946 56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0505 14:22:44.236639 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:22:44.254367 56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0505 14:22:44.271268 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0505 14:22:44.282915 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:22:44.293466 56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0505 14:22:44.317181 56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0505 14:22:44.327878   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0505 14:22:44.343024 56262 ssh_runner.go:195] Run: which cri-dockerd
I0505 14:22:44.346054 56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0505 14:22:44.353257 56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0505 14:22:44.367082 56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0505 14:22:44.465180 56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0505 14:22:44.569600 56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0505 14:22:44.569629 56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
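The 130-byte daemon.json pushed above is not echoed into the log. A minimal sketch of a daemon.json that selects the cgroupfs driver, using the standard dockerd "exec-opts" setting (an illustration, not the exact file minikube generated):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker   # mirrors the next two steps in the log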
I0505 14:22:44.584431 56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0505 14:22:44.680947 56262 ssh_runner.go:195] Run: sudo systemctl restart docker
I0505 14:23:45.736510 56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.056089884s)
I0505 14:23:45.736595 56262 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0505 14:23:45.770790 56262 out.go:177]
W0505 14:23:45.791249 56262 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
sudo journalctl --no-pager -u docker:
-- stdout --
May 05 21:22:41 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.352208248Z" level=info msg="Starting up"
May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.353022730Z" level=info msg="containerd not running, starting managed containerd"
May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.358767057Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.373539189Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388000547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388073973Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388137944Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388171760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388313706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388355785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388477111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388518957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388551610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388580389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388726935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388950191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390520791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390570725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390706880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390751886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390888815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390940476Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390972496Z" level=info msg="metadata content store policy set" policy=shared
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394800432Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394883868Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394961138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395000278Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395036706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395111009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395337703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395418767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395454129Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395484232Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395514263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395546554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395576938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395607440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395641518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395677040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395708605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395737963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395799761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395843188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395874408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395904381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395933636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395965927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395995431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396033716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396067448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396098841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396127871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396155969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396184510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396215668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396250321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396280045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396307939Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396379697Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396424577Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396475305Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396510849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396569471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396621386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396656010Z" level=info msg="NRI interface is disabled by configuration."
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396883316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396972499Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397031244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397069101Z" level=info msg="containerd successfully booted in 0.024677s"
May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.379929944Z" level=info msg="[graphdriver] trying configured driver: overlay2"
May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.413119848Z" level=info msg="Loading containers: start."
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.663705690Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.700545709Z" level=info msg="Loading containers: done."
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707501270Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707669278Z" level=info msg="Daemon has completed initialization"
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725886686Z" level=info msg="API listen on [::]:2376"
May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725971765Z" level=info msg="API listen on /var/run/docker.sock"
May 05 21:22:43 ha-671000-m03 systemd[1]: Started Docker Application Container Engine.
May 05 21:22:44 ha-671000-m03 systemd[1]: Stopping Docker Application Container Engine...
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.833114404Z" level=info msg="Processing signal 'terminated'"
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834199869Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834666188Z" level=info msg="Daemon shutdown complete"
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834695637Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834707874Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
May 05 21:22:45 ha-671000-m03 systemd[1]: docker.service: Deactivated successfully.
May 05 21:22:45 ha-671000-m03 systemd[1]: Stopped Docker Application Container Engine.
May 05 21:22:45 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
May 05 21:22:45 ha-671000-m03 dockerd[1073]: time="2024-05-05T21:22:45.887265470Z" level=info msg="Starting up"
May 05 21:23:45 ha-671000-m03 dockerd[1073]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
May 05 21:23:45 ha-671000-m03 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0505 14:23:45.791332 56262 out.go:239] *
W0505 14:23:45.791963 56262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0505 14:23:45.854203 56262 out.go:177]
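The dial failure at 21:23:45 ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded") points at containerd on ha-671000-m03 never coming back after its config was rewritten, so dockerd times out waiting for the socket. A hedged checklist one could run over SSH on that VM to confirm (not part of the captured run):
sudo systemctl is-active containerd
sudo journalctl -u containerd --no-pager | tail -n 50
ls -l /run/containerd/containerd.sock
sudo systemctl restart containerd && sudo systemctl restart docker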
==> Docker <==
May 05 21:22:04 ha-671000 dockerd[1136]: time="2024-05-05T21:22:04.237377141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263750494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263806421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263818283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263888173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265011165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265198272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265235383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265331468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.280534299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.280666251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.280681083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.284884558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.248610291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.248876754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.248900713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.249023707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:34 ha-671000 dockerd[1130]: time="2024-05-05T21:22:34.316945093Z" level=info msg="ignoring event" container=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317591194Z" level=info msg="shim disconnected" id=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 namespace=moby
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317738677Z" level=warning msg="cleaning up after shim disconnected" id=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 namespace=moby
May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317783286Z" level=info msg="cleaning up dead shim" namespace=moby
May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235098682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235605348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235714710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235995155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
4e72d733bb177 cbb01a7bd410d About a minute ago Running coredns 1 17013aecf8e89 coredns-7db6d8ff4d-hqtd2
a5ba9a7a24b6f cbb01a7bd410d About a minute ago Running coredns 1 5a876c8ef945c coredns-7db6d8ff4d-kjf54
c048dc81e6392 4950bb10b3f87 About a minute ago Running kindnet-cni 1 382155dbcfe93 kindnet-zvz9x
76503e51b3afa 8c811b4aec35f About a minute ago Running busybox 1 8637a9efa2c11 busybox-fc5497c4f-lfn9v
7001a9c78d0af a0bf559e280cf About a minute ago Running kube-proxy 1 f930d07fb2b00 kube-proxy-kppdj
0883553982a24 6e38f40d628db About a minute ago Exited storage-provisioner 1 cca445b0e122c storage-provisioner
64c952108db1f c7aad43836fa5 About a minute ago Running kube-controller-manager 2 66419f8520fde kube-controller-manager-ha-671000
0faa6b8c33ebd c42f13656d0b2 2 minutes ago Running kube-apiserver 1 70fab261c2b17 kube-apiserver-ha-671000
0c29a1524fb04 22aaebb38f4a9 2 minutes ago Running kube-vip 0 2c44ab6fb1b45 kube-vip-ha-671000
d51ddba3901bd c7aad43836fa5 2 minutes ago Exited kube-controller-manager 1 66419f8520fde kube-controller-manager-ha-671000
06468c7f97645 3861cfcd7c04c 2 minutes ago Running etcd 1 7eb485f57bef9 etcd-ha-671000
09b069cddaf09 259c8277fcbbc 2 minutes ago Running kube-scheduler 1 0b3f9b67d960c kube-scheduler-ha-671000
d08c19fcd330c gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12 5 minutes ago Exited busybox 0 0a3a1177976eb busybox-fc5497c4f-lfn9v
aa3ff28b7c901 cbb01a7bd410d 7 minutes ago Exited coredns 0 803b42dbd6068 coredns-7db6d8ff4d-kjf54
bfe23d4afc231 cbb01a7bd410d 7 minutes ago Exited coredns 0 26bf6869329a0 coredns-7db6d8ff4d-hqtd2
1a1434eaae36d kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 8 minutes ago Exited kindnet-cni 0 61be6d7331d2d kindnet-zvz9x
2de2ad908033c a0bf559e280cf 8 minutes ago Exited kube-proxy 0 ce98653ecf0b5 kube-proxy-kppdj
5254e6584697c 3861cfcd7c04c 8 minutes ago Exited etcd 0 6c18606ff8a34 etcd-ha-671000
52585f49ef66d c42f13656d0b2 8 minutes ago Exited kube-apiserver 0 157e6496c96d6 kube-apiserver-ha-671000
0f13fc419c3a3 259c8277fcbbc 8 minutes ago Exited kube-scheduler 0 20d7fc1ca35c2 kube-scheduler-ha-671000
==> coredns [4e72d733bb17] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:60404 - 16395 "HINFO IN 7673949606304789129.6924752665992071371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01220844s
==> coredns [a5ba9a7a24b6] <==
.:53
[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:54698 - 36003 "HINFO IN 1073736587953336830.7574535335510144074. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015279179s
==> coredns [aa3ff28b7c90] <==
[INFO] 10.244.0.4:55179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00060962s
[INFO] 10.244.0.4:54761 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000032941s
[INFO] 10.244.0.4:53596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034902s
[INFO] 10.244.1.2:52057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008017s
[INFO] 10.244.1.2:37246 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000039116s
[INFO] 10.244.1.2:41412 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078072s
[INFO] 10.244.1.2:35969 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042719s
[INFO] 10.244.1.2:60012 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000495345s
[INFO] 10.244.1.2:57444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068087s
[INFO] 10.244.1.2:56681 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071523s
[INFO] 10.244.1.2:51095 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038807s
[INFO] 10.244.2.2:39666 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012061s
[INFO] 10.244.0.4:36229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075354s
[INFO] 10.244.0.4:36052 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059981s
[INFO] 10.244.0.4:45966 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005648s
[INFO] 10.244.0.4:40793 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010383s
[INFO] 10.244.1.2:39020 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075539s
[INFO] 10.244.1.2:57719 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064383s
[INFO] 10.244.2.2:46470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097542s
[INFO] 10.244.2.2:54394 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123552s
[INFO] 10.244.2.2:60319 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000056346s
[INFO] 10.244.1.2:32801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087202s
[INFO] 10.244.1.2:39594 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089023s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [bfe23d4afc23] <==
[INFO] 10.244.2.2:60822 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010749854s
[INFO] 10.244.0.4:46715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116633s
[INFO] 10.244.0.4:36578 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000057682s
[INFO] 10.244.2.2:49239 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011646073s
[INFO] 10.244.2.2:60414 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097s
[INFO] 10.244.2.2:33426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011533001s
[INFO] 10.244.2.2:51459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091142s
[INFO] 10.244.0.4:52044 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037728s
[INFO] 10.244.0.4:58536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000026924s
[INFO] 10.244.0.4:60528 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030891s
[INFO] 10.244.0.4:46083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057358s
[INFO] 10.244.2.2:35752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076258s
[INFO] 10.244.2.2:52942 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063141s
[INFO] 10.244.2.2:37055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096791s
[INFO] 10.244.1.2:52668 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008334s
[INFO] 10.244.1.2:39089 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160813s
[INFO] 10.244.2.2:59653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000092778s
[INFO] 10.244.0.4:35085 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007747s
[INFO] 10.244.0.4:32964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073391s
[INFO] 10.244.0.4:44760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077879s
[INFO] 10.244.0.4:37758 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071268s
[INFO] 10.244.1.2:55625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061815s
[INFO] 10.244.1.2:50908 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000064514s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> describe nodes <==
Name: ha-671000
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-671000
kubernetes.io/os=linux
minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
minikube.k8s.io/name=ha-671000
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_05_05T14_15_29_0700
minikube.k8s.io/version=v1.33.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 05 May 2024 21:15:24 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-671000
AcquireTime: <unset>
RenewTime: Sun, 05 May 2024 21:23:39 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 05 May 2024 21:21:46 +0000 Sun, 05 May 2024 21:15:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 05 May 2024 21:21:46 +0000 Sun, 05 May 2024 21:15:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 05 May 2024 21:21:46 +0000 Sun, 05 May 2024 21:15:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 05 May 2024 21:21:46 +0000 Sun, 05 May 2024 21:15:49 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.169.0.51
Hostname: ha-671000
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
System Info:
Machine ID: 3721a595f38c41b8bbd3cdb36f05098b
System UUID: 93894e2d-0000-0000-8cc9-aa1a138ddf96
Boot ID: 844f38c6-034c-4659-bd02-e667c7e866d4
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://26.0.2
Kubelet Version: v1.30.0
Kube-Proxy Version: v1.30.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-fc5497c4f-lfn9v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
kube-system coredns-7db6d8ff4d-hqtd2 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 8m7s
kube-system coredns-7db6d8ff4d-kjf54 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 8m7s
kube-system etcd-ha-671000 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 8m22s
kube-system kindnet-zvz9x 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 8m7s
kube-system kube-apiserver-ha-671000 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m20s
kube-system kube-controller-manager-ha-671000 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m20s
kube-system kube-proxy-kppdj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system kube-scheduler-ha-671000 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m20s
kube-system kube-vip-ha-671000 0 (0%) 0 (0%) 0 (0%) 0 (0%) 117s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m7s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 290Mi (13%) 390Mi (18%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 102s kube-proxy
Normal Starting 8m5s kube-proxy
Normal NodeHasSufficientMemory 8m27s (x8 over 8m27s) kubelet Node ha-671000 status is now: NodeHasSufficientMemory
Normal NodeAllocatableEnforced 8m27s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 8m27s (x7 over 8m27s) kubelet Node ha-671000 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 8m27s (x8 over 8m27s) kubelet Node ha-671000 status is now: NodeHasNoDiskPressure
Normal Starting 8m27s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 8m20s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 8m20s kubelet Node ha-671000 status is now: NodeHasSufficientPID
Normal Starting 8m20s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m20s kubelet Node ha-671000 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m20s kubelet Node ha-671000 status is now: NodeHasNoDiskPressure
Normal RegisteredNode 8m8s node-controller Node ha-671000 event: Registered Node ha-671000 in Controller
Normal NodeReady 7m58s kubelet Node ha-671000 status is now: NodeReady
Normal RegisteredNode 6m54s node-controller Node ha-671000 event: Registered Node ha-671000 in Controller
Normal RegisteredNode 5m44s node-controller Node ha-671000 event: Registered Node ha-671000 in Controller
Normal RegisteredNode 3m29s node-controller Node ha-671000 event: Registered Node ha-671000 in Controller
Normal Starting 2m38s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m38s (x8 over 2m38s) kubelet Node ha-671000 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m38s (x8 over 2m38s) kubelet Node ha-671000 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m38s (x7 over 2m38s) kubelet Node ha-671000 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m38s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 118s node-controller Node ha-671000 event: Registered Node ha-671000 in Controller
Normal RegisteredNode 108s node-controller Node ha-671000 event: Registered Node ha-671000 in Controller
Name: ha-671000-m02
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-671000-m02
kubernetes.io/os=linux
minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
minikube.k8s.io/name=ha-671000
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2024_05_05T14_16_38_0700
minikube.k8s.io/version=v1.33.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 05 May 2024 21:16:36 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: ha-671000-m02
AcquireTime: <unset>
RenewTime: Sun, 05 May 2024 21:23:41 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 05 May 2024 21:21:38 +0000 Sun, 05 May 2024 21:16:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 05 May 2024 21:21:38 +0000 Sun, 05 May 2024 21:16:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 05 May 2024 21:21:38 +0000 Sun, 05 May 2024 21:16:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 05 May 2024 21:21:38 +0000 Sun, 05 May 2024 21:16:45 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.169.0.52
Hostname: ha-671000-m02
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
System Info:
Machine ID: cd0c52403e6948f895e68f7307e07d3c
System UUID: 294b4d68-0000-0000-b3f3-54381951a5e8
Boot ID: afe03ef7-7b17-481f-b318-67efdc00c911
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://26.0.2
Kubelet Version: v1.30.0
Kube-Proxy Version: v1.30.0
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-fc5497c4f-q27t4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
kube-system etcd-ha-671000-m02 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 7m9s
kube-system kindnet-kn94d 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 7m11s
kube-system kube-apiserver-ha-671000-m02 250m (12%) 0 (0%) 0 (0%) 0 (0%) 7m8s
kube-system kube-controller-manager-ha-671000-m02 200m (10%) 0 (0%) 0 (0%) 0 (0%) 7m9s
kube-system kube-proxy-5jwqs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m11s
kube-system kube-scheduler-ha-671000-m02 100m (5%) 0 (0%) 0 (0%) 0 (0%) 7m9s
kube-system kube-vip-ha-671000-m02 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m6s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 100m (5%)
memory 150Mi (7%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m7s kube-proxy
Normal Starting 113s kube-proxy
Normal Starting 3m42s kube-proxy
Normal NodeHasSufficientMemory 7m11s (x8 over 7m11s) kubelet Node ha-671000-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m11s (x8 over 7m11s) kubelet Node ha-671000-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m11s (x7 over 7m11s) kubelet Node ha-671000-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m11s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7m8s node-controller Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
Normal RegisteredNode 6m54s node-controller Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
Normal RegisteredNode 5m44s node-controller Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
Normal NodeAllocatableEnforced 3m45s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 3m45s kubelet Node ha-671000-m02 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m45s kubelet Node ha-671000-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m45s kubelet Node ha-671000-m02 status is now: NodeHasSufficientPID
Warning Rebooted 3m45s kubelet Node ha-671000-m02 has been rebooted, boot id: 4c58d033-04b8-4c15-8fdc-920ae431b3e3
Normal Starting 3m45s kubelet Starting kubelet.
Normal RegisteredNode 3m29s node-controller Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
Normal Starting 2m20s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 2m19s (x8 over 2m19s) kubelet Node ha-671000-m02 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m19s (x8 over 2m19s) kubelet Node ha-671000-m02 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 2m19s (x7 over 2m19s) kubelet Node ha-671000-m02 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m19s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 118s node-controller Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
Normal RegisteredNode 108s node-controller Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
Name: ha-671000-m03
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-671000-m03
kubernetes.io/os=linux
minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
minikube.k8s.io/name=ha-671000
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2024_05_05T14_17_49_0700
minikube.k8s.io/version=v1.33.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 05 May 2024 21:17:46 +0000
Taints: node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unreachable:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: ha-671000-m03
AcquireTime: <unset>
RenewTime: Sun, 05 May 2024 21:20:19 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure Unknown Sun, 05 May 2024 21:18:16 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Sun, 05 May 2024 21:18:16 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Sun, 05 May 2024 21:18:16 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Sun, 05 May 2024 21:18:16 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 192.169.0.53
Hostname: ha-671000-m03
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
System Info:
Machine ID: 57e667ca3d044ecd8738fa77dd77fa8b
System UUID: be904905-0000-0000-ae38-2f481381ca7c
Boot ID: 8a14d3dc-4069-4d68-a1d0-b7b11fe06e54
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://26.0.2
Kubelet Version: v1.30.0
Kube-Proxy Version: v1.30.0
PodCIDR: 10.244.2.0/24
PodCIDRs: 10.244.2.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox-fc5497c4f-kr2jr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m38s
kube-system etcd-ha-671000-m03 100m (5%) 0 (0%) 100Mi (4%) 0 (0%) 5m59s
kube-system kindnet-cbt9x 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 6m1s
kube-system kube-apiserver-ha-671000-m03 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m59s
kube-system kube-controller-manager-ha-671000-m03 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m59s
kube-system kube-proxy-zwgd2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m1s
kube-system kube-scheduler-ha-671000-m03 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m59s
kube-system kube-vip-ha-671000-m03 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m57s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 100m (5%)
memory 150Mi (7%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m57s kube-proxy
Normal NodeHasSufficientMemory 6m1s (x8 over 6m1s) kubelet Node ha-671000-m03 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m1s (x8 over 6m1s) kubelet Node ha-671000-m03 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m1s (x7 over 6m1s) kubelet Node ha-671000-m03 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m1s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5m59s node-controller Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
Normal RegisteredNode 5m58s node-controller Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
Normal RegisteredNode 5m44s node-controller Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
Normal RegisteredNode 3m29s node-controller Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
Normal RegisteredNode 118s node-controller Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
Normal RegisteredNode 108s node-controller Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
Normal NodeNotReady 78s node-controller Node ha-671000-m03 status is now: NodeNotReady
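The unreachable taints and NodeNotReady event above match the kubelet on ha-671000-m03 going silent once Docker failed to restart. Illustrative kubectl queries (not commands the test ran) that would reproduce this node-status view against the same cluster:
kubectl get nodes -o wide
kubectl describe node ha-671000-m03
kubectl get events --field-selector involvedObject.name=ha-671000-m03 --sort-by=.lastTimestamp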
Name: ha-671000-m04
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=ha-671000-m04
kubernetes.io/os=linux
minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
minikube.k8s.io/name=ha-671000
minikube.k8s.io/primary=false
minikube.k8s.io/updated_at=2024_05_05T14_18_38_0700
minikube.k8s.io/version=v1.33.0
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 05 May 2024 21:18:38 +0000
Taints: node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unreachable:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: ha-671000-m04
AcquireTime: <unset>
RenewTime: Sun, 05 May 2024 21:20:20 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure Unknown Sun, 05 May 2024 21:19:15 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
DiskPressure Unknown Sun, 05 May 2024 21:19:15 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
PIDPressure Unknown Sun, 05 May 2024 21:19:15 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Ready Unknown Sun, 05 May 2024 21:19:15 +0000 Sun, 05 May 2024 21:22:29 +0000 NodeStatusUnknown Kubelet stopped posting node status.
Addresses:
InternalIP: 192.169.0.54
Hostname: ha-671000-m04
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 2164336Ki
pods: 110
System Info:
Machine ID: d4981d8834c947ca92647a836bff839f
System UUID: 8d0f44c8-0000-0000-aaa8-77d77d483dce
Boot ID: 16c48acc-c76d-4b03-8b93-c113a1acb125
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://26.0.2
Kubelet Version: v1.30.0
Kube-Proxy Version: v1.30.0
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kindnet-ffg2p 100m (5%) 100m (5%) 50Mi (2%) 50Mi (2%) 5m9s
kube-system kube-proxy-b45s6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (5%) 100m (5%)
memory 50Mi (2%) 50Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m kube-proxy
Normal NodeHasSufficientPID 5m9s (x2 over 5m9s) kubelet Node ha-671000-m04 status is now: NodeHasSufficientPID
Normal RegisteredNode 5m9s node-controller Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
Normal NodeAllocatableEnforced 5m9s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m9s (x2 over 5m9s) kubelet Node ha-671000-m04 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m9s (x2 over 5m9s) kubelet Node ha-671000-m04 status is now: NodeHasNoDiskPressure
Normal RegisteredNode 5m8s node-controller Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
Normal RegisteredNode 5m4s node-controller Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
Normal NodeReady 4m32s kubelet Node ha-671000-m04 status is now: NodeReady
Normal RegisteredNode 3m29s node-controller Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
Normal RegisteredNode 118s node-controller Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
Normal RegisteredNode 108s node-controller Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
Normal NodeNotReady 78s node-controller Node ha-671000-m04 status is now: NodeNotReady
==> dmesg <==
[ +0.036177] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
[ +0.007984] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
[ +5.371215] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
[ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
[ +0.006679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +2.612826] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
[May 5 21:21] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +2.610406] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
[ +0.095617] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
[ +1.314538] kauditd_printk_skb: 42 callbacks suppressed
[ +0.655682] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
[ +0.256796] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
[ +0.100506] systemd-fstab-generator[1108]: Ignoring "noauto" option for root device
[ +0.111570] systemd-fstab-generator[1122]: Ignoring "noauto" option for root device
[ +2.444375] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
[ +0.102765] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
[ +0.091262] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
[ +0.136792] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
[ +0.441863] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
[ +6.939646] kauditd_printk_skb: 276 callbacks suppressed
[ +21.981272] kauditd_printk_skb: 40 callbacks suppressed
[May 5 21:22] kauditd_printk_skb: 25 callbacks suppressed
[ +5.342141] kauditd_printk_skb: 29 callbacks suppressed
==> etcd [06468c7f9764] <==
{"level":"warn","ts":"2024-05-05T21:23:21.591728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:22.146429Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:22.146476Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:26.148455Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:26.148515Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:26.591902Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:26.591953Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:30.150682Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:30.150746Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:31.592757Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:31.592823Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:34.152847Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:34.152977Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:36.59348Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:36.593489Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:38.154487Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:38.154534Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:41.594251Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:41.59428Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:42.155735Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:42.155924Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:46.158028Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:46.158078Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:46.594975Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
{"level":"warn","ts":"2024-05-05T21:23:46.595025Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
==> etcd [5254e6584697] <==
{"level":"warn","ts":"2024-05-05T21:20:41.244715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.517168037s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
2024/05/05 21:20:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2024-05-05T21:20:41.244728Z","caller":"traceutil/trace.go:171","msg":"trace[1070592193] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"7.517242865s","start":"2024-05-05T21:20:33.727481Z","end":"2024-05-05T21:20:41.244724Z","steps":["trace[1070592193] 'agreement among raft nodes before linearized reading' (duration: 7.517229047s)"],"step_count":1}
{"level":"warn","ts":"2024-05-05T21:20:41.244739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:20:33.727472Z","time spent":"7.517264459s","remote":"127.0.0.1:52468","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
2024/05/05 21:20:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"warn","ts":"2024-05-05T21:20:41.318319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
{"level":"warn","ts":"2024-05-05T21:20:41.318441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
{"level":"info","ts":"2024-05-05T21:20:41.318529Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1792221d12ca7193","current-leader-member-id":"0"}
{"level":"info","ts":"2024-05-05T21:20:41.318575Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318613Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318632Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318702Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318726Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318811Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318844Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
{"level":"info","ts":"2024-05-05T21:20:41.318852Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.318878Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.318893Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.319101Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.319165Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.319193Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.319239Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c5e392ded2f33250"}
{"level":"info","ts":"2024-05-05T21:20:41.320696Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.51:2380"}
{"level":"info","ts":"2024-05-05T21:20:41.320808Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.51:2380"}
{"level":"info","ts":"2024-05-05T21:20:41.320835Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-671000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.51:2380"],"advertise-client-urls":["https://192.169.0.51:2379"]}
==> kernel <==
21:23:48 up 2 min, 0 users, load average: 0.38, 0.30, 0.12
Linux ha-671000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kindnet [1a1434eaae36] <==
I0505 21:19:55.731657 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
I0505 21:20:05.736429 1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
I0505 21:20:05.736525 1 main.go:227] handling current node
I0505 21:20:05.736552 1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
I0505 21:20:05.736689 1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24]
I0505 21:20:05.736923 1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
I0505 21:20:05.736977 1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24]
I0505 21:20:05.737155 1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
I0505 21:20:05.737283 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
I0505 21:20:15.745695 1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
I0505 21:20:15.745995 1 main.go:227] handling current node
I0505 21:20:15.746046 1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
I0505 21:20:15.746126 1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24]
I0505 21:20:15.746307 1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
I0505 21:20:15.746355 1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24]
I0505 21:20:15.746485 1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
I0505 21:20:15.746532 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
I0505 21:20:25.759299 1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
I0505 21:20:25.759513 1 main.go:227] handling current node
I0505 21:20:25.759563 1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
I0505 21:20:25.759608 1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24]
I0505 21:20:25.759700 1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
I0505 21:20:25.759814 1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24]
I0505 21:20:25.759945 1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
I0505 21:20:25.759992 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
==> kindnet [c048dc81e639] <==
I0505 21:23:10.599027 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
I0505 21:23:20.608994 1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
I0505 21:23:20.609285 1 main.go:227] handling current node
I0505 21:23:20.609469 1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
I0505 21:23:20.609541 1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24]
I0505 21:23:20.609681 1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
I0505 21:23:20.609741 1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24]
I0505 21:23:20.610023 1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
I0505 21:23:20.610110 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
I0505 21:23:30.618901 1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
I0505 21:23:30.619021 1 main.go:227] handling current node
I0505 21:23:30.619044 1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
I0505 21:23:30.619070 1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24]
I0505 21:23:30.619227 1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
I0505 21:23:30.619254 1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24]
I0505 21:23:30.619356 1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
I0505 21:23:30.619383 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
I0505 21:23:40.633008 1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
I0505 21:23:40.633100 1 main.go:227] handling current node
I0505 21:23:40.633177 1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
I0505 21:23:40.633333 1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24]
I0505 21:23:40.633697 1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
I0505 21:23:40.633810 1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24]
I0505 21:23:40.634043 1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
I0505 21:23:40.634273 1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24]
==> kube-apiserver [0faa6b8c33eb] <==
I0505 21:21:37.291123 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0505 21:21:37.291359 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0505 21:21:37.274777 1 aggregator.go:163] waiting for initial CRD sync...
I0505 21:21:37.375644 1 shared_informer.go:320] Caches are synced for configmaps
I0505 21:21:37.375925 1 apf_controller.go:379] Running API Priority and Fairness config worker
I0505 21:21:37.375971 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
I0505 21:21:37.377200 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0505 21:21:37.378817 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0505 21:21:37.381581 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0505 21:21:37.377409 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0505 21:21:37.381892 1 handler_discovery.go:447] Starting ResourceDiscoveryManager
I0505 21:21:37.382046 1 aggregator.go:165] initial CRD sync complete...
I0505 21:21:37.382198 1 autoregister_controller.go:141] Starting autoregister controller
I0505 21:21:37.382286 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0505 21:21:37.382435 1 cache.go:39] Caches are synced for autoregister controller
W0505 21:21:37.393655 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
I0505 21:21:37.416822 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0505 21:21:37.416834 1 shared_informer.go:320] Caches are synced for node_authorizer
I0505 21:21:37.417065 1 policy_source.go:224] refreshing policies
I0505 21:21:37.456433 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0505 21:21:37.495739 1 controller.go:615] quota admission added evaluator for: endpoints
I0505 21:21:37.501072 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
E0505 21:21:37.503150 1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
I0505 21:21:38.282464 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0505 21:21:38.614946 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.51 192.169.0.52 192.169.0.53]
==> kube-apiserver [52585f49ef66] <==
W0505 21:20:41.280549 1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280601 1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280629 1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280682 1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280709 1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280761 1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280789 1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280843 1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280871 1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280923 1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.280951 1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.281002 1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.281029 1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.281054 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
E0505 21:20:41.281265 1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 1.492664ms, panicked: false, err: rpc error: code = Unknown desc = malformed header: missing HTTP content-type, panic-reason: <nil>
W0505 21:20:41.284566 1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.284618 1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.284660 1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.284759 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.285529 1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.285564 1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.285594 1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0505 21:20:41.285900 1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
E0505 21:20:41.286124 1 timeout.go:142] post-timeout activity - time-elapsed: 149.222533ms, GET "/readyz" result: <nil>
I0505 21:20:41.286844 1 controller.go:128] Shutting down kubernetes service endpoint reconciler
==> kube-controller-manager [64c952108db1] <==
I0505 21:21:59.982133 1 shared_informer.go:320] Caches are synced for disruption
I0505 21:22:00.000358 1 shared_informer.go:320] Caches are synced for deployment
I0505 21:22:00.007804 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0505 21:22:00.024496 1 shared_informer.go:320] Caches are synced for resource quota
I0505 21:22:00.035366 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0505 21:22:00.035542 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.697µs"
I0505 21:22:00.035653 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.077µs"
I0505 21:22:00.070482 1 shared_informer.go:320] Caches are synced for resource quota
I0505 21:22:00.445610 1 shared_informer.go:320] Caches are synced for garbage collector
I0505 21:22:00.453488 1 shared_informer.go:320] Caches are synced for garbage collector
I0505 21:22:00.453531 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0505 21:22:05.511091 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.295484ms"
I0505 21:22:05.511370 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.644µs"
I0505 21:22:21.210161 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47µs"
I0505 21:22:22.203561 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.395µs"
I0505 21:22:29.671409 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.983559ms"
I0505 21:22:29.671803 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="344.603µs"
I0505 21:22:34.895317 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.354µs"
I0505 21:22:34.945918 1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qfwk6\": the object has been modified; please apply your changes to the latest version and try again"
I0505 21:22:34.946345 1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bea99034-e1b7-4a88-8a06-fbc74abeaaf9", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qfwk6": the object has been modified; please apply your changes to the latest version and try again
I0505 21:22:34.949671 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.865154ms"
I0505 21:22:34.950019 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.905µs"
I0505 21:22:36.927342 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="78.051µs"
I0505 21:22:36.944792 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.116942ms"
I0505 21:22:36.945091 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.255µs"
==> kube-controller-manager [d51ddba3901b] <==
I0505 21:21:17.233998 1 serving.go:380] Generated self-signed cert in-memory
I0505 21:21:17.699254 1 controllermanager.go:189] "Starting" version="v1.30.0"
I0505 21:21:17.699295 1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0505 21:21:17.702300 1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
I0505 21:21:17.704596 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0505 21:21:17.704681 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0505 21:21:17.704829 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
E0505 21:21:37.707829 1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
==> kube-proxy [2de2ad908033] <==
I0505 21:15:42.197467 1 server_linux.go:69] "Using iptables proxy"
I0505 21:15:42.206342 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
I0505 21:15:42.233495 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0505 21:15:42.233528 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0505 21:15:42.233540 1 server_linux.go:165] "Using iptables Proxier"
I0505 21:15:42.235848 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0505 21:15:42.236234 1 server.go:872] "Version info" version="v1.30.0"
I0505 21:15:42.236321 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0505 21:15:42.237244 1 config.go:101] "Starting endpoint slice config controller"
I0505 21:15:42.237489 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0505 21:15:42.237528 1 config.go:192] "Starting service config controller"
I0505 21:15:42.237533 1 shared_informer.go:313] Waiting for caches to sync for service config
I0505 21:15:42.237620 1 config.go:319] "Starting node config controller"
I0505 21:15:42.237748 1 shared_informer.go:313] Waiting for caches to sync for node config
I0505 21:15:42.338371 1 shared_informer.go:320] Caches are synced for service config
I0505 21:15:42.338453 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0505 21:15:42.338567 1 shared_informer.go:320] Caches are synced for node config
==> kube-proxy [7001a9c78d0a] <==
I0505 21:22:05.427749 1 server_linux.go:69] "Using iptables proxy"
I0505 21:22:05.441644 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
I0505 21:22:05.545461 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0505 21:22:05.545682 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0505 21:22:05.545778 1 server_linux.go:165] "Using iptables Proxier"
I0505 21:22:05.548756 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0505 21:22:05.549189 1 server.go:872] "Version info" version="v1.30.0"
I0505 21:22:05.549278 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0505 21:22:05.551545 1 config.go:192] "Starting service config controller"
I0505 21:22:05.551674 1 shared_informer.go:313] Waiting for caches to sync for service config
I0505 21:22:05.551761 1 config.go:101] "Starting endpoint slice config controller"
I0505 21:22:05.551848 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0505 21:22:05.552969 1 config.go:319] "Starting node config controller"
I0505 21:22:05.553109 1 shared_informer.go:313] Waiting for caches to sync for node config
I0505 21:22:05.652764 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0505 21:22:05.652801 1 shared_informer.go:320] Caches are synced for service config
I0505 21:22:05.653231 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [09b069cddaf0] <==
I0505 21:21:17.140666 1 serving.go:380] Generated self-signed cert in-memory
W0505 21:21:27.959721 1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.51:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
W0505 21:21:27.959770 1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
W0505 21:21:27.959776 1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0505 21:21:37.325220 1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
I0505 21:21:37.325291 1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0505 21:21:37.336314 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0505 21:21:37.337352 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0505 21:21:37.337505 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0505 21:21:37.341283 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0505 21:21:37.438307 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [0f13fc419c3a] <==
I0505 21:18:38.425370 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ffg2p" node="ha-671000-m04"
E0505 21:18:38.428127 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tgdtz\": pod kube-proxy-tgdtz is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tgdtz" node="ha-671000-m04"
E0505 21:18:38.428397 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f5f9b9e4-4771-49af-a1e4-37910d8267a4(kube-system/kube-proxy-tgdtz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tgdtz"
E0505 21:18:38.428585 1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tgdtz\": pod kube-proxy-tgdtz is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-tgdtz"
I0505 21:18:38.428695 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tgdtz" node="ha-671000-m04"
E0505 21:18:38.442949 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-66l5l\": pod kindnet-66l5l is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-66l5l" node="ha-671000-m04"
E0505 21:18:38.443283 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4f688ff7-efff-4775-9a88-d954e81852f5(kube-system/kindnet-66l5l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-66l5l"
E0505 21:18:38.443527 1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-66l5l\": pod kindnet-66l5l is already assigned to node \"ha-671000-m04\"" pod="kube-system/kindnet-66l5l"
I0505 21:18:38.443685 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-66l5l" node="ha-671000-m04"
E0505 21:18:38.443578 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xvf68\": pod kube-proxy-xvf68 is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xvf68" node="ha-671000-m04"
E0505 21:18:38.444183 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 24a52ab7-73e5-4d91-810b-a2260dae577f(kube-system/kube-proxy-xvf68) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xvf68"
E0505 21:18:38.444289 1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xvf68\": pod kube-proxy-xvf68 is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-xvf68"
I0505 21:18:38.444408 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xvf68" node="ha-671000-m04"
E0505 21:18:38.489548 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sbspd\": pod kindnet-sbspd is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sbspd" node="ha-671000-m04"
E0505 21:18:38.489803 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod afb510c4-ddf4-4844-bdf5-80343510ecb8(kube-system/kindnet-sbspd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sbspd"
E0505 21:18:38.490102 1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sbspd\": pod kindnet-sbspd is already assigned to node \"ha-671000-m04\"" pod="kube-system/kindnet-sbspd"
I0505 21:18:38.490296 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sbspd" node="ha-671000-m04"
E0505 21:18:38.499960 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rldf7\": pod kube-proxy-rldf7 is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rldf7" node="ha-671000-m04"
E0505 21:18:38.500590 1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f733f40c-9915-44e5-8f24-9f4101633739(kube-system/kube-proxy-rldf7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rldf7"
E0505 21:18:38.501561 1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rldf7\": pod kube-proxy-rldf7 is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-rldf7"
I0505 21:18:38.501767 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rldf7" node="ha-671000-m04"
E0505 21:18:40.483901 1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fntvj\": pod kube-proxy-fntvj is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fntvj" node="ha-671000-m04"
E0505 21:18:40.483990 1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fntvj\": pod kube-proxy-fntvj is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-fntvj"
I0505 21:18:40.484875 1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fntvj" node="ha-671000-m04"
E0505 21:20:41.266642 1 run.go:74] "command failed" err="finished without leader elect"
==> kubelet <==
May 05 21:22:09 ha-671000 kubelet[1488]: I0505 21:22:09.221758 1488 scope.go:117] "RemoveContainer" containerID="f51438bee6679e498856deddc1a03d6233f30f95098fa5a3ec5c95988f53adbd"
May 05 21:22:21 ha-671000 kubelet[1488]: I0505 21:22:21.192016 1488 scope.go:117] "RemoveContainer" containerID="aa3ff28b7c9017843d8d888a429ee706bd6460febccb79e8787320e99efbdfa4"
May 05 21:22:21 ha-671000 kubelet[1488]: E0505 21:22:21.192254 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-kjf54_kube-system(c780145e-9d82-4451-94e8-dee09a63eadb)\"" pod="kube-system/coredns-7db6d8ff4d-kjf54" podUID="c780145e-9d82-4451-94e8-dee09a63eadb"
May 05 21:22:22 ha-671000 kubelet[1488]: I0505 21:22:22.192271 1488 scope.go:117] "RemoveContainer" containerID="bfe23d4afc2313a26ae10b34970e899d74fe1e0f1c01bf9df2058c578bac6bf1"
May 05 21:22:22 ha-671000 kubelet[1488]: E0505 21:22:22.192522 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-hqtd2_kube-system(e76b43f2-8189-4e5d-adc3-ced554e9ee07)\"" pod="kube-system/coredns-7db6d8ff4d-hqtd2" podUID="e76b43f2-8189-4e5d-adc3-ced554e9ee07"
May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.191629 1488 scope.go:117] "RemoveContainer" containerID="aa3ff28b7c9017843d8d888a429ee706bd6460febccb79e8787320e99efbdfa4"
May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.865379 1488 scope.go:117] "RemoveContainer" containerID="797ed8f77f01f6ba02573542d48c7a31705a8fe5b3efed78400f7de2a56d9358"
May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.865674 1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
May 05 21:22:34 ha-671000 kubelet[1488]: E0505 21:22:34.865777 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
May 05 21:22:36 ha-671000 kubelet[1488]: I0505 21:22:36.192222 1488 scope.go:117] "RemoveContainer" containerID="bfe23d4afc2313a26ae10b34970e899d74fe1e0f1c01bf9df2058c578bac6bf1"
May 05 21:22:49 ha-671000 kubelet[1488]: I0505 21:22:49.192583 1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
May 05 21:22:49 ha-671000 kubelet[1488]: E0505 21:22:49.193087 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
May 05 21:23:02 ha-671000 kubelet[1488]: I0505 21:23:02.191713 1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
May 05 21:23:02 ha-671000 kubelet[1488]: E0505 21:23:02.192199 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
May 05 21:23:09 ha-671000 kubelet[1488]: E0505 21:23:09.208918 1488 iptables.go:577] "Could not set up iptables canary" err=<
May 05 21:23:09 ha-671000 kubelet[1488]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
May 05 21:23:09 ha-671000 kubelet[1488]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
May 05 21:23:09 ha-671000 kubelet[1488]: Perhaps ip6tables or your kernel needs to be upgraded.
May 05 21:23:09 ha-671000 kubelet[1488]: > table="nat" chain="KUBE-KUBELET-CANARY"
May 05 21:23:14 ha-671000 kubelet[1488]: I0505 21:23:14.191788 1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
May 05 21:23:14 ha-671000 kubelet[1488]: E0505 21:23:14.192304 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
May 05 21:23:29 ha-671000 kubelet[1488]: I0505 21:23:29.193869 1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
May 05 21:23:29 ha-671000 kubelet[1488]: E0505 21:23:29.194441 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
May 05 21:23:40 ha-671000 kubelet[1488]: I0505 21:23:40.191896 1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
May 05 21:23:40 ha-671000 kubelet[1488]: E0505 21:23:40.192265 1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-671000 -n ha-671000
helpers_test.go:261: (dbg) Run: kubectl --context ha-671000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.07s)
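The post-mortem above shows the ha-671000-m03 etcd peer unreachable on 192.169.0.53:2380, ha-671000-m04 going NodeNotReady, and coredns/storage-provisioner in CrashLoopBackOff. A minimal follow-up sketch, not part of the captured test output: the commands below use only the context, profile, and pod names that appear in the logs above, plus standard kubectl behavior; nothing minikube-specific is assumed beyond the existing ha-671000 kubeconfig context.

# Which apiserver post-start hooks are still failing (matches the
# "[-]poststarthook/rbac/bootstrap-roles failed" entries in the
# kube-controller-manager log above)
kubectl --context ha-671000 get --raw='/readyz?verbose'

# Why the crash-looping pods exited last time (pod names taken from the kubelet log)
kubectl --context ha-671000 -n kube-system logs storage-provisioner --previous
kubectl --context ha-671000 -n kube-system logs coredns-7db6d8ff4d-kjf54 --previous

# Confirm which nodes the control plane currently considers Ready
kubectl --context ha-671000 get nodes -o wide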