Test Report: Hyperkit_macOS 18602

                    
f0f00e4b78df34cc802665249d4ea4180b698205:2024-05-05:34338

Failed tests (4/336)

Order  Failed test                                             Duration (s)
171    TestMultiControlPlane/serial/RestartClusterKeepsNodes   208.07
172    TestMultiControlPlane/serial/DeleteSecondaryNode        13.25
177    TestMultiControlPlane/serial/AddSecondaryNode           427.09
294    TestPause/serial/SecondStartNoReconfiguration           194.67
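
The RestartClusterKeepsNodes failure below reduces to three minikube invocations issued by ha_test.go. As a rough manual repro sketch (assuming a locally built out/minikube-darwin-amd64 and an existing four-node "ha-671000" hyperkit profile, as on this CI agent; replaying the commands outside the test harness may not fail identically):

	out/minikube-darwin-amd64 node list -p ha-671000 -v=7 --alsologtostderr          # list cluster nodes
	out/minikube-darwin-amd64 stop -p ha-671000 -v=7 --alsologtostderr               # stop all nodes
	out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr  # restart and wait for readiness

In this run the final start command exited with status 90 after roughly 2m57s instead of bringing all nodes back up; the full output follows.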
TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.07s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-671000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-671000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-671000 -v=7 --alsologtostderr: (27.186695254s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr
E0505 14:21:07.678588   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:22:31.483471   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:23:23.841471   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr: exit status 90 (2m56.98572553s)

-- stdout --
	* [ha-671000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-671000" primary control-plane node in "ha-671000" cluster
	* Restarting existing hyperkit VM for "ha-671000" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	* Enabled addons: 
	
	* Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
	* Restarting existing hyperkit VM for "ha-671000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.51
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	  - env NO_PROXY=192.169.0.51
	* Verifying Kubernetes components...
	
	* Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
	* Restarting existing hyperkit VM for "ha-671000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.51,192.169.0.52
	
	

-- /stdout --
** stderr ** 
	I0505 14:20:48.965096   56262 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:20:48.965304   56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:20:48.965309   56262 out.go:304] Setting ErrFile to fd 2...
	I0505 14:20:48.965313   56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:20:48.965501   56262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:20:48.966984   56262 out.go:298] Setting JSON to false
	I0505 14:20:48.991851   56262 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19219,"bootTime":1714924829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 14:20:48.991949   56262 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:20:49.013239   56262 out.go:177] * [ha-671000] minikube v1.33.0 on Darwin 14.4.1
	I0505 14:20:49.055173   56262 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:20:49.055223   56262 notify.go:220] Checking for updates...
	I0505 14:20:49.077109   56262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:20:49.097964   56262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 14:20:49.119233   56262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:20:49.139935   56262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:20:49.161146   56262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:20:49.182881   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:20:49.183046   56262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:20:49.183689   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.183764   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:20:49.193369   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57871
	I0505 14:20:49.193700   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:20:49.194120   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:20:49.194134   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:20:49.194326   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:20:49.194462   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.223183   56262 out.go:177] * Using the hyperkit driver based on existing profile
	I0505 14:20:49.265211   56262 start.go:297] selected driver: hyperkit
	I0505 14:20:49.265249   56262 start.go:901] validating driver "hyperkit" against &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:20:49.265473   56262 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:20:49.265691   56262 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:20:49.265889   56262 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 14:20:49.275605   56262 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 14:20:49.280711   56262 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.280731   56262 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 14:20:49.284127   56262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:20:49.284202   56262 cni.go:84] Creating CNI manager for ""
	I0505 14:20:49.284211   56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:20:49.284292   56262 start.go:340] cluster config:
	{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false he
lm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:20:49.284394   56262 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:20:49.326088   56262 out.go:177] * Starting "ha-671000" primary control-plane node in "ha-671000" cluster
	I0505 14:20:49.347002   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:20:49.347074   56262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 14:20:49.347098   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:20:49.347288   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:20:49.347306   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:20:49.347472   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:20:49.348516   56262 start.go:360] acquireMachinesLock for ha-671000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:20:49.348656   56262 start.go:364] duration metric: took 99.405µs to acquireMachinesLock for "ha-671000"
	I0505 14:20:49.348707   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:20:49.348726   56262 fix.go:54] fixHost starting: 
	I0505 14:20:49.349125   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.349160   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:20:49.358523   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57873
	I0505 14:20:49.358884   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:20:49.359279   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:20:49.359298   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:20:49.359523   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:20:49.359669   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.359788   56262 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:20:49.359894   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.359963   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 55694
	I0505 14:20:49.360866   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
	I0505 14:20:49.360926   56262 fix.go:112] recreateIfNeeded on ha-671000: state=Stopped err=<nil>
	I0505 14:20:49.360950   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	W0505 14:20:49.361041   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:20:49.402877   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000" ...
	I0505 14:20:49.423939   56262 main.go:141] libmachine: (ha-671000) Calling .Start
	I0505 14:20:49.424311   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.424354   56262 main.go:141] libmachine: (ha-671000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid
	I0505 14:20:49.426302   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
	I0505 14:20:49.426313   56262 main.go:141] libmachine: (ha-671000) DBG | pid 55694 is in state "Stopped"
	I0505 14:20:49.426344   56262 main.go:141] libmachine: (ha-671000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid...
	I0505 14:20:49.426771   56262 main.go:141] libmachine: (ha-671000) DBG | Using UUID 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96
	I0505 14:20:49.551381   56262 main.go:141] libmachine: (ha-671000) DBG | Generated MAC 72:52:a3:7d:5c:d1
	I0505 14:20:49.551411   56262 main.go:141] libmachine: (ha-671000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:20:49.551646   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:20:49.551692   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:20:49.551780   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyp
rintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:20:49.551846   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nom
odeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:20:49.551864   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:20:49.553184   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Pid is 56275
	I0505 14:20:49.553639   56262 main.go:141] libmachine: (ha-671000) DBG | Attempt 0
	I0505 14:20:49.553663   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.553735   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:20:49.555494   56262 main.go:141] libmachine: (ha-671000) DBG | Searching for 72:52:a3:7d:5c:d1 in /var/db/dhcpd_leases ...
	I0505 14:20:49.555595   56262 main.go:141] libmachine: (ha-671000) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:20:49.555611   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:20:49.555629   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
	I0505 14:20:49.555648   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
	I0505 14:20:49.555661   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394853}
	I0505 14:20:49.555667   56262 main.go:141] libmachine: (ha-671000) DBG | Found match: 72:52:a3:7d:5c:d1
	I0505 14:20:49.555674   56262 main.go:141] libmachine: (ha-671000) DBG | IP: 192.169.0.51
	I0505 14:20:49.555696   56262 main.go:141] libmachine: (ha-671000) Calling .GetConfigRaw
	I0505 14:20:49.556342   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:20:49.556516   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:20:49.556975   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:20:49.556985   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.557119   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:20:49.557222   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:20:49.557336   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:20:49.557465   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:20:49.557602   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:20:49.557742   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:20:49.557972   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:20:49.557981   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:20:49.561305   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:20:49.617858   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:20:49.618520   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:20:49.618541   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:20:49.618548   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:20:49.618556   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:20:50.003923   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:20:50.003954   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:20:50.118574   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:20:50.118591   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:20:50.118604   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:20:50.118620   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:20:50.119491   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:20:50.119502   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:20:55.386088   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:20:55.386105   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:20:55.386124   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:20:55.410129   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:20:59.165992   56262 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.51:22: connect: connection refused
	I0505 14:21:02.226047   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:21:02.226063   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.226198   56262 buildroot.go:166] provisioning hostname "ha-671000"
	I0505 14:21:02.226208   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.226303   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.226392   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.226492   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.226582   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.226673   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.226801   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.226937   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.226945   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000 && echo "ha-671000" | sudo tee /etc/hostname
	I0505 14:21:02.297369   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000
	
	I0505 14:21:02.297395   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.297543   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.297643   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.297751   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.297848   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.297983   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.298121   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.298132   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:21:02.363709   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:21:02.363736   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:21:02.363757   56262 buildroot.go:174] setting up certificates
	I0505 14:21:02.363764   56262 provision.go:84] configureAuth start
	I0505 14:21:02.363771   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.363911   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:02.364012   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.364108   56262 provision.go:143] copyHostCerts
	I0505 14:21:02.364139   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:02.364208   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:21:02.364216   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:02.364363   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:21:02.364576   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:02.364616   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:21:02.364621   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:02.364702   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:21:02.364858   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:02.364899   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:21:02.364904   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:02.364979   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:21:02.365133   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000 san=[127.0.0.1 192.169.0.51 ha-671000 localhost minikube]
	I0505 14:21:02.566783   56262 provision.go:177] copyRemoteCerts
	I0505 14:21:02.566851   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:21:02.566867   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.567002   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.567081   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.567166   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.567249   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:02.603993   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:21:02.604064   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:21:02.623864   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:21:02.623931   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0505 14:21:02.642984   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:21:02.643054   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 14:21:02.662651   56262 provision.go:87] duration metric: took 298.874135ms to configureAuth
	I0505 14:21:02.662663   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:21:02.662832   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:02.662845   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:02.662976   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.663065   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.663164   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.663269   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.663357   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.663467   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.663594   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.663602   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:21:02.721847   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:21:02.721864   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:21:02.721944   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:21:02.721957   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.722094   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.722182   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.722290   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.722379   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.722504   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.722641   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.722685   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:21:02.791477   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:21:02.791499   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.791628   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.791713   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.791806   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.791895   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.792000   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.792138   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.792148   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:21:04.463791   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:21:04.463805   56262 machine.go:97] duration metric: took 14.90688888s to provisionDockerMachine
	I0505 14:21:04.463814   56262 start.go:293] postStartSetup for "ha-671000" (driver="hyperkit")
	I0505 14:21:04.463821   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:21:04.463832   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.464011   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:21:04.464034   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.464144   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.464235   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.464343   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.464431   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.510297   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:21:04.514333   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:21:04.514346   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:21:04.514446   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:21:04.514637   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:21:04.514644   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:21:04.514851   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:21:04.528097   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:04.557607   56262 start.go:296] duration metric: took 93.785206ms for postStartSetup
	I0505 14:21:04.557630   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.557802   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:21:04.557815   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.557914   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.558026   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.558104   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.558180   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.595384   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:21:04.595439   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:21:04.627954   56262 fix.go:56] duration metric: took 15.279298664s for fixHost
	I0505 14:21:04.627978   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.628106   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.628210   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.628316   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.628400   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.628519   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:04.628664   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:04.628671   56262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:21:04.687788   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944064.851392424
	
	I0505 14:21:04.687801   56262 fix.go:216] guest clock: 1714944064.851392424
	I0505 14:21:04.687806   56262 fix.go:229] Guest: 2024-05-05 14:21:04.851392424 -0700 PDT Remote: 2024-05-05 14:21:04.627967 -0700 PDT m=+15.708271847 (delta=223.425424ms)
	I0505 14:21:04.687822   56262 fix.go:200] guest clock delta is within tolerance: 223.425424ms
	I0505 14:21:04.687828   56262 start.go:83] releasing machines lock for "ha-671000", held for 15.339229169s
	I0505 14:21:04.687844   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.687975   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:04.688073   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688362   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688461   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688537   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:21:04.688563   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.688585   56262 ssh_runner.go:195] Run: cat /version.json
	I0505 14:21:04.688594   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.688666   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.688681   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.688776   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.688794   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.688857   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.688870   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.688932   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.688951   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.773179   56262 ssh_runner.go:195] Run: systemctl --version
	I0505 14:21:04.778074   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:21:04.782225   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:21:04.782267   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:21:04.795505   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:21:04.795515   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:04.795626   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:04.813193   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:21:04.822043   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:21:04.830859   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:21:04.830912   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:21:04.839650   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:04.848348   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:21:04.857332   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:04.866100   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:21:04.874955   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:21:04.883995   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:21:04.892686   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:21:04.901641   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:21:04.909531   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:21:04.917434   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:05.025345   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:21:05.045401   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:05.045483   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:21:05.056970   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:05.067558   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:21:05.082472   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:05.093595   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:05.104660   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:21:05.123434   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:05.136644   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:05.151834   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:21:05.154642   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:21:05.162375   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:21:05.175761   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:21:05.270844   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:21:05.375810   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:21:05.375883   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:21:05.390245   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:05.495960   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:21:07.797662   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301692609s)
	I0505 14:21:07.797733   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:21:07.809357   56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:21:07.822066   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:07.832350   56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:21:07.930252   56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:21:08.029360   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.124190   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:21:08.137986   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:08.149027   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.258895   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:21:08.326102   56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:21:08.326177   56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:21:08.330736   56262 start.go:562] Will wait 60s for crictl version
	I0505 14:21:08.330787   56262 ssh_runner.go:195] Run: which crictl
	I0505 14:21:08.333926   56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:21:08.360867   56262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:21:08.360957   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:08.380536   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:08.444390   56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:21:08.444441   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:08.444833   56262 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:21:08.449245   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
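Note: the one-liner above pins host.minikube.internal in the guest's /etc/hosts idempotently: it filters out any existing mapping, appends a fresh "ip<TAB>name" line, and copies the temp file back with sudo. A rough stdlib Go equivalent of that pattern, as a hedged sketch (the helper name and the scratch file are assumptions, not minikube code):

	package main

	import (
		"os"
		"strings"
	)

	// pinHost drops any line already ending in "\t<name>" and appends "ip\tname",
	// mirroring the grep -v / echo / cp pipeline in the log above.
	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Writes a scratch copy here; the real command targets /etc/hosts via sudo.
		_ = pinHost("hosts.test", "192.169.0.1", "host.minikube.internal")
	}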
	I0505 14:21:08.459088   56262 kubeadm.go:877] updating cluster {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 14:21:08.459178   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:21:08.459237   56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:21:08.472336   56262 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:21:08.472348   56262 docker.go:615] Images already preloaded, skipping extraction
	I0505 14:21:08.472419   56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:21:08.484264   56262 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:21:08.484284   56262 cache_images.go:84] Images are preloaded, skipping loading
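Note: the two `docker images --format {{.Repository}}:{{.Tag}}` listings above are checked against the image set the selected Kubernetes version needs; because everything from the preload tarball is already present, extraction and image loading are skipped. A toy sketch of that containment check follows; the required-image list is a hypothetical subset, and this is not minikube's cache_images logic.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Hypothetical subset of the images expected for Kubernetes v1.30.0.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range required {
			fmt.Printf("%s present=%v\n", img, have[img])
		}
	}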
	I0505 14:21:08.484299   56262 kubeadm.go:928] updating node { 192.169.0.51 8443 v1.30.0 docker true true} ...
	I0505 14:21:08.484375   56262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:21:08.484439   56262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:21:08.500967   56262 cni.go:84] Creating CNI manager for ""
	I0505 14:21:08.500979   56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:21:08.500990   56262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:21:08.501005   56262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671000 NodeName:ha-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:21:08.501088   56262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-671000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
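Note: the kubeadm options struct above (kubeadm.go:181) is rendered into the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration manifest shown here. minikube does this from its own templates; the snippet below is a deliberately stripped-down, hypothetical stand-in using text/template, not the real template, just to show the rendering step.

	package main

	import (
		"os"
		"text/template"
	)

	// Params is a tiny stand-in for the kubeadm options logged above.
	type Params struct {
		AdvertiseAddress string
		BindPort         int
		ClusterName      string
		PodSubnet        string
		ServiceSubnet    string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := Params{
			AdvertiseAddress: "192.169.0.51",
			BindPort:         8443,
			ClusterName:      "mk",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
		}
		if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}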
	
	I0505 14:21:08.501113   56262 kube-vip.go:111] generating kube-vip config ...
	I0505 14:21:08.501162   56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:21:08.513119   56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:21:08.513193   56262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
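Note: the vip_leaderelection, vip_leasename, vip_leaseduration, vip_renewdeadline and vip_retryperiod settings in the manifest above make kube-vip hold the HA VIP 192.169.0.254 only while it owns the plndr-cp-lock coordination Lease, so exactly one control-plane node answers on the virtual endpoint at a time. Below is an illustrative client-go leader-election sketch with the same timings; the identity and kubeconfig path are assumptions, and this is not kube-vip's actual code.

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-671000"}, // hypothetical identity
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 5 * time.Second, // vip_leaseduration
			RenewDeadline: 3 * time.Second, // vip_renewdeadline
			RetryPeriod:   1 * time.Second, // vip_retryperiod
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* claim and announce the VIP here */ },
				OnStoppedLeading: func() { /* release the VIP here */ },
			},
		})
	}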
	I0505 14:21:08.513250   56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:21:08.521487   56262 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:21:08.521531   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 14:21:08.528952   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0505 14:21:08.542487   56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:21:08.556157   56262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0505 14:21:08.570110   56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:21:08.584111   56262 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:21:08.586992   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:08.596597   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.710024   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:08.724251   56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.51
	I0505 14:21:08.724262   56262 certs.go:194] generating shared ca certs ...
	I0505 14:21:08.724272   56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.724457   56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:21:08.724528   56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:21:08.724539   56262 certs.go:256] generating profile certs ...
	I0505 14:21:08.724648   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:21:08.724671   56262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190
	I0505 14:21:08.724686   56262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.53 192.169.0.254]
	I0505 14:21:08.826095   56262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 ...
	I0505 14:21:08.826111   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190: {Name:mk26b58616f2e9bcce56069037dda85d1d8c350c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.826754   56262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 ...
	I0505 14:21:08.826765   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190: {Name:mk7fc32008d240a4b7e6cb64bdeb1f596430582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.826983   56262 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
	I0505 14:21:08.827192   56262 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
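Note: the apiserver serving cert generated above embeds every address clients may use to reach the control plane: the in-cluster service and loopback addresses, all three control-plane node IPs, and the HA VIP 192.169.0.254. A compact crypto/x509 sketch of minting a cert with that SAN list follows; it self-signs for brevity, whereas the real cert is signed with the minikubeCA key.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
			"192.169.0.51", "192.169.0.52", "192.169.0.53", "192.169.0.254"}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // 26280h, matching CertExpiration above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}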
	I0505 14:21:08.827434   56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:21:08.827443   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:21:08.827466   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:21:08.827487   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:21:08.827506   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:21:08.827523   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:21:08.827541   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:21:08.827559   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:21:08.827576   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:21:08.827667   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:21:08.827718   56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:21:08.827726   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:21:08.827758   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:21:08.827791   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:21:08.827822   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:21:08.827892   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:08.827924   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:08.827970   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:21:08.827988   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:21:08.828425   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:21:08.851250   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:21:08.872963   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:21:08.895079   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:21:08.922893   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 14:21:08.953937   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:21:08.983911   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:21:09.023252   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:21:09.070795   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:21:09.113576   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:21:09.150037   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:21:09.170089   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:21:09.184262   56262 ssh_runner.go:195] Run: openssl version
	I0505 14:21:09.188637   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:21:09.197186   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.200763   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.200802   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.205113   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:21:09.213846   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:21:09.222459   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.225992   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.226036   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.230212   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:21:09.238744   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:21:09.247131   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.250641   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.250684   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.254933   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
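Note: each CA file installed above ends up reachable in /etc/ssl/certs both by name and by its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL locates trust anchors during verification. A small sketch of the hash-then-symlink step, shelling out to the same `openssl x509 -hash -noout` call the log uses; the paths and helper name are illustrative only.

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash creates <certsDir>/<subject-hash>.0 -> certPath, the layout the
	// ln -fs commands above produce.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -f
		return os.Symlink(certPath, link)
	}

	func main() {
		// Illustrative invocation; the real run operates on the guest VM over SSH.
		_ = linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	}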
	I0505 14:21:09.263283   56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:21:09.266913   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:21:09.271690   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:21:09.276202   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:21:09.280723   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:21:09.285120   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:21:09.289468   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
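Note: the six `openssl x509 ... -checkend 86400` runs above confirm that none of the existing kubeadm-managed certs (apiserver-etcd-client, apiserver-kubelet-client, etcd server/healthcheck/peer, front-proxy-client) expires within the next 24 hours, which is part of deciding that the restart can reuse them instead of regenerating. The same check expressed in stdlib Go, as a hedged sketch:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemPath expires within d,
	// mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}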
	I0505 14:21:09.293767   56262 kubeadm.go:391] StartCluster: {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:21:09.293893   56262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:21:09.305167   56262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:21:09.312937   56262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:21:09.312947   56262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:21:09.312965   56262 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:21:09.313010   56262 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:21:09.320777   56262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:21:09.321098   56262 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671000" does not appear in /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.321183   56262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-53665/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671000" cluster setting kubeconfig missing "ha-671000" context setting]
	I0505 14:21:09.321347   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.321996   56262 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.322179   56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:21:09.322483   56262 cert_rotation.go:137] Starting client certificate rotation controller
	I0505 14:21:09.322660   56262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:21:09.330103   56262 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.51
	I0505 14:21:09.330115   56262 kubeadm.go:591] duration metric: took 17.1285ms to restartPrimaryControlPlane
	I0505 14:21:09.330120   56262 kubeadm.go:393] duration metric: took 36.320628ms to StartCluster
	I0505 14:21:09.330127   56262 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.330217   56262 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.330637   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.330863   56262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:21:09.330875   56262 start.go:240] waiting for startup goroutines ...
	I0505 14:21:09.330887   56262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:21:09.373046   56262 out.go:177] * Enabled addons: 
	I0505 14:21:09.331023   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:09.395270   56262 addons.go:510] duration metric: took 64.318856ms for enable addons: enabled=[]
	I0505 14:21:09.395388   56262 start.go:245] waiting for cluster config update ...
	I0505 14:21:09.395406   56262 start.go:254] writing updated cluster config ...
	I0505 14:21:09.418289   56262 out.go:177] 
	I0505 14:21:09.439589   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:09.439723   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.462158   56262 out.go:177] * Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
	I0505 14:21:09.504016   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:21:09.504076   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:21:09.504246   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:21:09.504264   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:21:09.504398   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.505447   56262 start.go:360] acquireMachinesLock for ha-671000-m02: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:21:09.505557   56262 start.go:364] duration metric: took 85.865µs to acquireMachinesLock for "ha-671000-m02"
	I0505 14:21:09.505582   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:21:09.505589   56262 fix.go:54] fixHost starting: m02
	I0505 14:21:09.506042   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:09.506080   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:09.515413   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57896
	I0505 14:21:09.515746   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:09.516119   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:09.516136   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:09.516414   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:09.516555   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:09.516655   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:21:09.516736   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.516805   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56210
	I0505 14:21:09.517744   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
	I0505 14:21:09.517764   56262 fix.go:112] recreateIfNeeded on ha-671000-m02: state=Stopped err=<nil>
	I0505 14:21:09.517774   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	W0505 14:21:09.517855   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:21:09.539362   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m02" ...
	I0505 14:21:09.581177   56262 main.go:141] libmachine: (ha-671000-m02) Calling .Start
	I0505 14:21:09.581513   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.581582   56262 main.go:141] libmachine: (ha-671000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid
	I0505 14:21:09.583319   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
	I0505 14:21:09.583336   56262 main.go:141] libmachine: (ha-671000-m02) DBG | pid 56210 is in state "Stopped"
	I0505 14:21:09.583361   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid...
	I0505 14:21:09.583762   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Using UUID 294bfc97-3e6f-4d68-b3f3-54381951a5e8
	I0505 14:21:09.611765   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Generated MAC 92:83:2c:36:f7:7d
	I0505 14:21:09.611789   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:21:09.611924   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:21:09.611964   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:21:09.612015   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "294bfc97-3e6f-4d68-b3f3-54381951a5e8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:21:09.612064   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 294bfc97-3e6f-4d68-b3f3-54381951a5e8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:21:09.612079   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:21:09.613498   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Pid is 56285
	I0505 14:21:09.613935   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Attempt 0
	I0505 14:21:09.613949   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.614012   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
	I0505 14:21:09.615713   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Searching for 92:83:2c:36:f7:7d in /var/db/dhcpd_leases ...
	I0505 14:21:09.615841   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:21:09.615860   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:21:09.615883   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:21:09.615897   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
	I0505 14:21:09.615905   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found match: 92:83:2c:36:f7:7d
	I0505 14:21:09.615916   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetConfigRaw
	I0505 14:21:09.615920   56262 main.go:141] libmachine: (ha-671000-m02) DBG | IP: 192.169.0.52
	I0505 14:21:09.616579   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:09.616779   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.617318   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:21:09.617329   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:09.617443   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:09.617536   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:09.617633   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:09.617737   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:09.617836   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:09.617968   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:09.618123   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:09.618132   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:21:09.621348   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:21:09.630281   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:21:09.631193   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:21:09.631218   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:21:09.631230   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:21:09.631252   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:21:10.019586   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:21:10.019603   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:21:10.134248   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:21:10.134266   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:21:10.134281   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:21:10.134292   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:21:10.135185   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:21:10.135199   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:21:15.419942   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:21:15.419970   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:21:15.419978   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:21:15.445269   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:21:20.698093   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:21:20.698110   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.698266   56262 buildroot.go:166] provisioning hostname "ha-671000-m02"
	I0505 14:21:20.698277   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.698366   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.698443   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.698518   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.698602   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.698696   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.698824   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:20.698977   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:20.698987   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m02 && echo "ha-671000-m02" | sudo tee /etc/hostname
	I0505 14:21:20.773304   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m02
	
	I0505 14:21:20.773319   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.773451   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.773547   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.773625   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.773710   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.773837   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:20.773989   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:20.774000   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:21:20.846506   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:21:20.846523   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:21:20.846532   56262 buildroot.go:174] setting up certificates
	I0505 14:21:20.846537   56262 provision.go:84] configureAuth start
	I0505 14:21:20.846545   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.846678   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:20.846753   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.846822   56262 provision.go:143] copyHostCerts
	I0505 14:21:20.846847   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:20.846900   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:21:20.846906   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:20.847106   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:21:20.847298   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:20.847327   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:21:20.847332   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:20.847414   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:21:20.847555   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:20.847584   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:21:20.847588   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:20.847657   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:21:20.847808   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m02 san=[127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube]
	I0505 14:21:20.923054   56262 provision.go:177] copyRemoteCerts
	I0505 14:21:20.923102   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:21:20.923114   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.923242   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.923344   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.923432   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.923508   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:20.963007   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:21:20.963079   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:21:20.982214   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:21:20.982293   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:21:21.001587   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:21:21.001658   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:21:21.020765   56262 provision.go:87] duration metric: took 174.141582ms to configureAuth
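configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal spot-check of the result, assuming the profile/node names and paths from this run (the invocation itself is illustrative, not part of the test):
	# inspect the SANs on the freshly provisioned server certificate
	minikube ssh -p ha-671000 -n ha-671000-m02 -- \
	  "sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"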
	I0505 14:21:21.020780   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:21:21.020945   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:21.020958   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:21.021085   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.021186   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.021280   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.021382   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.021493   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.021630   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.021764   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.021777   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:21:21.088593   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:21:21.088605   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:21:21.088686   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:21:21.088698   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.088827   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.088944   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.089047   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.089155   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.089299   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.089434   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.089481   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:21:21.165319   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:21:21.165336   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.165469   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.165561   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.165660   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.165755   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.165892   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.166034   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.166046   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:21:22.810399   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:21:22.810414   56262 machine.go:97] duration metric: took 13.184745912s to provisionDockerMachine
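provisionDockerMachine writes /lib/systemd/system/docker.service.new, diffs it against the installed unit and, only when they differ (here the unit did not exist yet, hence the "can't stat" diff output above), moves it into place and runs daemon-reload / enable / restart. The drop-in clears the inherited ExecStart and relaunches dockerd with TLS on tcp://0.0.0.0:2376 plus the NO_PROXY environment. A sketch of a manual follow-up check, assuming the same profile and node (not captured in this log):
	# confirm docker came up with the rewritten unit
	minikube ssh -p ha-671000 -n ha-671000-m02 -- "sudo systemctl is-active docker"
	minikube ssh -p ha-671000 -n ha-671000-m02 -- \
	  "sudo systemctl cat docker | grep -E '^(ExecStart|Environment)='"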
	I0505 14:21:22.810422   56262 start.go:293] postStartSetup for "ha-671000-m02" (driver="hyperkit")
	I0505 14:21:22.810435   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:21:22.810448   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:22.810630   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:21:22.810642   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.810731   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.810813   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.810958   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.811059   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:22.854108   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:21:22.857587   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:21:22.857599   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:21:22.857687   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:21:22.857827   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:21:22.857833   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:21:22.857984   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:21:22.870076   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:22.896680   56262 start.go:296] duration metric: took 86.209325ms for postStartSetup
	I0505 14:21:22.896713   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:22.896900   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:21:22.896916   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.897010   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.897116   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.897207   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.897282   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:22.937842   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:21:22.937898   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:21:22.971365   56262 fix.go:56] duration metric: took 13.45726146s for fixHost
	I0505 14:21:22.971396   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.971537   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.971639   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.971717   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.971804   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.971961   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:22.972106   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:22.972117   56262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:21:23.038093   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944083.052286945
	
	I0505 14:21:23.038109   56262 fix.go:216] guest clock: 1714944083.052286945
	I0505 14:21:23.038115   56262 fix.go:229] Guest: 2024-05-05 14:21:23.052286945 -0700 PDT Remote: 2024-05-05 14:21:22.971379 -0700 PDT m=+34.042274957 (delta=80.907945ms)
	I0505 14:21:23.038125   56262 fix.go:200] guest clock delta is within tolerance: 80.907945ms
	I0505 14:21:23.038129   56262 start.go:83] releasing machines lock for "ha-671000-m02", held for 13.524025366s
	I0505 14:21:23.038145   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.038286   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:23.061518   56262 out.go:177] * Found network options:
	I0505 14:21:23.083843   56262 out.go:177]   - NO_PROXY=192.169.0.51
	W0505 14:21:23.105432   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:21:23.105470   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106334   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106599   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106711   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:21:23.106753   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	W0505 14:21:23.106918   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:21:23.107013   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:21:23.107023   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:23.107033   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:23.107244   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:23.107275   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:23.107414   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:23.107468   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:23.107556   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:23.107590   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:23.107700   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	W0505 14:21:23.143066   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:21:23.143128   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:21:23.312270   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:21:23.312288   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:23.312377   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:23.327567   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:21:23.336186   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:21:23.344528   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:21:23.344575   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:21:23.352890   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:23.361005   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:21:23.369046   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:23.377280   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:21:23.385827   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:21:23.394012   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:21:23.402113   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:21:23.410536   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:21:23.418126   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:21:23.425500   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:23.526138   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
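The sed pipeline above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, pause:3.9 as the sandbox image and /etc/cni/net.d as the CNI conf dir, then restarts containerd. A minimal sketch of how to confirm the rewrite took effect, assuming the same paths:
	# confirm the cgroup driver setting written by the sed pipeline
	minikube ssh -p ha-671000 -n ha-671000-m02 -- \
	  "sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml"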
	I0505 14:21:23.544818   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:23.544892   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:21:23.559895   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:23.572081   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:21:23.584840   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:23.595478   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:23.606028   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:21:23.632278   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:23.643848   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:23.658675   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:21:23.661665   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:21:23.669850   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:21:23.683220   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:21:23.786303   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:21:23.893788   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:21:23.893809   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:21:23.908293   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:24.010074   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:21:26.298709   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.287835945s)
	I0505 14:21:26.298771   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:21:26.310190   56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:21:26.324652   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:26.336377   56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:21:26.435974   56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:21:26.534723   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:26.647643   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:21:26.661375   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:26.672706   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:26.778709   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:21:26.840618   56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:21:26.840697   56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:21:26.844919   56262 start.go:562] Will wait 60s for crictl version
	I0505 14:21:26.844974   56262 ssh_runner.go:195] Run: which crictl
	I0505 14:21:26.849165   56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:21:26.874329   56262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
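Because the cluster uses Docker as the container runtime, kubelet reaches it through cri-dockerd; the harness waits for /var/run/cri-dockerd.sock and then queries it with crictl (output above: runtime docker 26.0.2, CRI API v1). The same check can be run by hand inside the guest, using the endpoint written to /etc/crictl.yaml earlier; the invocation is illustrative:
	# query the CRI endpoint directly
	minikube ssh -p ha-671000 -n ha-671000-m02 -- \
	  "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version"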
	I0505 14:21:26.874403   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:26.890208   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:26.929797   56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:21:26.949648   56262 out.go:177]   - env NO_PROXY=192.169.0.51
	I0505 14:21:26.970782   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:26.971166   56262 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:21:26.975958   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:26.985550   56262 mustload.go:65] Loading cluster: ha-671000
	I0505 14:21:26.985727   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:26.985939   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:26.985954   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:26.994516   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57918
	I0505 14:21:26.994869   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:26.995203   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:26.995220   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:26.995417   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:26.995536   56262 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:21:26.995629   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:26.995703   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:21:26.996652   56262 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:21:26.996892   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:26.996917   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:27.005463   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57920
	I0505 14:21:27.005786   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:27.006124   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:27.006142   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:27.006378   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:27.006493   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:27.006597   56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.52
	I0505 14:21:27.006603   56262 certs.go:194] generating shared ca certs ...
	I0505 14:21:27.006614   56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:27.006755   56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:21:27.006813   56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:21:27.006821   56262 certs.go:256] generating profile certs ...
	I0505 14:21:27.006913   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:21:27.006999   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e823369f
	I0505 14:21:27.007048   56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:21:27.007055   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:21:27.007075   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:21:27.007095   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:21:27.007113   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:21:27.007130   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:21:27.007151   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:21:27.007170   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:21:27.007187   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:21:27.007262   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:21:27.007299   56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:21:27.007308   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:21:27.007341   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:21:27.007375   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:21:27.007408   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:21:27.007476   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:27.007517   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.007538   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.007556   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.007581   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:27.007663   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:27.007746   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:27.007820   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:27.007907   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:27.036107   56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0505 14:21:27.039382   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 14:21:27.047195   56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0505 14:21:27.050362   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0505 14:21:27.058524   56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 14:21:27.061585   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 14:21:27.069461   56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0505 14:21:27.072439   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 14:21:27.080982   56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0505 14:21:27.084070   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 14:21:27.092062   56262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0505 14:21:27.095149   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 14:21:27.103105   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:21:27.123887   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:21:27.144018   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:21:27.164034   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:21:27.183960   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 14:21:27.204170   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:21:27.224085   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:21:27.244379   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:21:27.264411   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:21:27.283983   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:21:27.303697   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:21:27.323613   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 14:21:27.337907   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0505 14:21:27.351842   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 14:21:27.365462   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 14:21:27.379337   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 14:21:27.393337   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 14:21:27.406867   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 14:21:27.420462   56262 ssh_runner.go:195] Run: openssl version
	I0505 14:21:27.425063   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:21:27.433747   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.437275   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.437314   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.441663   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:21:27.450070   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:21:27.458559   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.462027   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.462088   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.466402   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:21:27.474903   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:21:27.484026   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.487471   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.487506   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.491806   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:21:27.500356   56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:21:27.503912   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:21:27.508255   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:21:27.512583   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:21:27.516997   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:21:27.521261   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:21:27.525514   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
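Each openssl call above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so expiring control-plane certs would be caught at this step. For example, against the etcd server cert path from the log:
	# exit status tells whether the cert survives the next 24h
	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "still valid in 24h" || echo "expires within 24h"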
	I0505 14:21:27.529849   56262 kubeadm.go:928] updating node {m02 192.169.0.52 8443 v1.30.0 docker true true} ...
	I0505 14:21:27.529904   56262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:21:27.529918   56262 kube-vip.go:111] generating kube-vip config ...
	I0505 14:21:27.529952   56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:21:27.542376   56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:21:27.542421   56262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
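The generated kube-vip static pod advertises the control-plane VIP 192.169.0.254 on eth0 via ARP, with leader election (lease plndr-cp-lock) and load-balancing of port 8443 enabled. Once the node is up, VIP placement can be checked with something like the following; the commands and the kube context name are assumptions, not part of the harness:
	# see whether this control-plane node currently holds the VIP, and list the kube-vip pods
	minikube ssh -p ha-671000 -n ha-671000-m02 -- "ip -4 addr show eth0 | grep 192.169.0.254"
	kubectl --context ha-671000 -n kube-system get pods -o wide | grep kube-vip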
	I0505 14:21:27.542477   56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:21:27.550208   56262 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:21:27.550254   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 14:21:27.557751   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 14:21:27.571295   56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:21:27.584791   56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:21:27.598438   56262 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:21:27.601396   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:27.610834   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:27.705062   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:27.720000   56262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:21:27.761967   56262 out.go:177] * Verifying Kubernetes components...
	I0505 14:21:27.720191   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:27.783193   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:27.916127   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:27.937011   56262 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:27.937198   56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	W0505 14:21:27.937233   56262 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
	I0505 14:21:27.937400   56262 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:21:27.937478   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:27.937483   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:27.937491   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:27.937495   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.141758   56262 round_trippers.go:574] Response Status: 200 OK in 9202 milliseconds
	I0505 14:21:37.151494   56262 node_ready.go:49] node "ha-671000-m02" has status "Ready":"True"
	I0505 14:21:37.151510   56262 node_ready.go:38] duration metric: took 9.212150687s for node "ha-671000-m02" to be "Ready" ...
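Note the host override logged at 14:21:27.937: the stale VIP endpoint https://192.169.0.254:8443 is replaced with the primary's https://192.169.0.51:8443 before polling, and the node reports Ready after roughly 9.2s. An equivalent one-off readiness check with kubectl, assuming the kube context carries the profile name (illustrative, not part of the harness):
	kubectl --context ha-671000 get node ha-671000-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'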
	I0505 14:21:37.151520   56262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:21:37.151577   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:21:37.151583   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.151590   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.151594   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.191750   56262 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0505 14:21:37.198443   56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.198500   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:21:37.198504   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.198511   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.198515   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.209480   56262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0505 14:21:37.210158   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.210166   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.210172   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.210175   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.218742   56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:21:37.219086   56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.219096   56262 pod_ready.go:81] duration metric: took 20.63356ms for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.219105   56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.219148   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
	I0505 14:21:37.219153   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.219162   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.219170   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.221463   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.221880   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.221889   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.221897   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.221905   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.226727   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:37.227035   56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.227045   56262 pod_ready.go:81] duration metric: took 7.931899ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.227052   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.227120   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
	I0505 14:21:37.227125   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.227131   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.227135   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.228755   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.229130   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.229137   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.229143   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.229147   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.230595   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.230887   56262 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.230895   56262 pod_ready.go:81] duration metric: took 3.837029ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.230901   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.230929   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
	I0505 14:21:37.230934   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.230939   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.230943   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.232448   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.232868   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:37.232875   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.232880   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.232887   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.234369   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.234695   56262 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.234704   56262 pod_ready.go:81] duration metric: took 3.797599ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.234710   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.234742   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m03
	I0505 14:21:37.234747   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.234753   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.234760   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.236183   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.351671   56262 request.go:629] Waited for 115.086464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:37.351703   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:37.351742   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.351749   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.351752   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.353285   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.353602   56262 pod_ready.go:92] pod "etcd-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.353612   56262 pod_ready.go:81] duration metric: took 118.878942ms for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.353624   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.551816   56262 request.go:629] Waited for 198.124765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:21:37.551893   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:21:37.551900   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.551906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.551909   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.554076   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.753242   56262 request.go:629] Waited for 198.55091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.753343   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.753355   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.753365   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.753371   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.756033   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.756647   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.756662   56262 pod_ready.go:81] duration metric: took 402.967586ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.756670   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.952604   56262 request.go:629] Waited for 195.869842ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:21:37.952645   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:21:37.952654   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.952662   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.952668   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.954903   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.151783   56262 request.go:629] Waited for 196.293382ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:38.151830   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:38.151837   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.151842   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.151847   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.156373   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:38.156768   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.156778   56262 pod_ready.go:81] duration metric: took 400.046736ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.156785   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.351807   56262 request.go:629] Waited for 194.95401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
	I0505 14:21:38.351854   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
	I0505 14:21:38.351862   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.351904   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.351908   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.354097   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.552842   56262 request.go:629] Waited for 198.080217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:38.552968   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:38.552980   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.552990   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.552997   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.555719   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.556135   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.556146   56262 pod_ready.go:81] duration metric: took 399.298154ms for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.556153   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.752061   56262 request.go:629] Waited for 195.828299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:21:38.752126   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:21:38.752135   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.752148   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.752158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.754957   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.951929   56262 request.go:629] Waited for 196.315529ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:38.951959   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:38.951964   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.951969   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.951973   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.953886   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:38.954275   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.954284   56262 pod_ready.go:81] duration metric: took 398.072724ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.954297   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:39.151925   56262 request.go:629] Waited for 197.547759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.152007   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.152019   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.152025   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.152029   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.157962   56262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:21:39.352575   56262 request.go:629] Waited for 194.147234ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.352619   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.352625   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.352631   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.352635   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.356708   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:39.553301   56262 request.go:629] Waited for 97.737035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.553334   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.553340   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.553346   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.553351   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.555371   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:39.752052   56262 request.go:629] Waited for 196.251955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.752134   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.752145   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.752153   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.752158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.754627   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:39.955025   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.955059   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.955067   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.955072   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.956871   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.152049   56262 request.go:629] Waited for 194.641301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.152132   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.152171   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.152184   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.152191   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.154660   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.456022   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:40.456041   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.456050   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.456056   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.458617   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.552124   56262 request.go:629] Waited for 92.99221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.552206   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.552212   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.552220   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.552225   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.554220   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.956144   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:40.956162   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.956168   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.956172   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.958759   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.959215   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.959223   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.959229   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.959232   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.960907   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.961228   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:41.455646   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:41.455689   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.455698   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.455722   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.457872   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:41.458331   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:41.458339   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.458344   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.458355   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.460082   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:41.955474   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:41.955516   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.955524   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.955528   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.957597   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:41.958178   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:41.958186   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.958190   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.958193   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.960269   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:42.454954   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:42.454969   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.454975   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.454978   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.456939   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:42.457382   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:42.457390   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.457395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.457398   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.459026   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:42.955443   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:42.955465   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.955493   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.955500   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.957908   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:42.958355   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:42.958362   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.958368   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.958371   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.959853   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:43.455723   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:43.455776   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.455798   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.455806   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.458560   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:43.458997   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:43.459004   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.459009   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.459013   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.460509   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:43.460811   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:43.955429   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:43.955470   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.955481   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.955487   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.957836   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:43.958298   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:43.958305   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.958310   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.958320   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.960083   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:44.455061   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:44.455081   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.455088   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.455091   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.458998   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:44.459504   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:44.459511   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.459517   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.459521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.461518   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:44.956537   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:44.956577   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.956598   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.956604   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.959253   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:44.959715   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:44.959723   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.959729   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.959733   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.961411   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:45.455377   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:45.455402   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.455414   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.455420   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.458080   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:45.458718   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:45.458729   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.458736   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.458752   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.463742   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:45.464348   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:45.955580   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:45.955620   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.955630   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.955635   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.957968   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:45.958442   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:45.958449   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.958455   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.958466   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.959999   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:46.457118   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:46.457136   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.457145   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.457149   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.459543   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:46.460023   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:46.460031   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.460036   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.460047   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.461647   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:46.956302   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:46.956318   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.956324   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.956326   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.958416   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:46.958859   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:46.958866   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.958872   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.958874   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.960501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.456753   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:47.456797   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.456806   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.456812   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.458891   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:47.459328   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:47.459336   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.459342   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.459345   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.460911   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.955503   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:47.955545   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.955558   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.955564   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.959575   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:47.960158   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:47.960166   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.960171   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.960175   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.961799   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.962164   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:48.456730   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:48.456747   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.456753   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.456757   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.460539   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:48.461047   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:48.461055   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.461061   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.461064   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.465508   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:48.465989   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:48.465998   56262 pod_ready.go:81] duration metric: took 9.510763792s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.466006   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.466042   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m03
	I0505 14:21:48.466047   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.466052   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.466055   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.472370   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:21:48.473005   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:48.473012   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.473017   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.473020   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.481996   56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:21:48.482501   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:48.482510   56262 pod_ready.go:81] duration metric: took 16.497528ms for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.482517   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.482551   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:48.482556   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.482561   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.482565   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.490468   56262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 14:21:48.491138   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:48.491145   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.491151   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.491155   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.494380   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:48.983087   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.004024   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.004031   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.004035   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.006380   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.007016   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.007024   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.007030   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.007033   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.008914   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:49.483919   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.483931   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.483938   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.483941   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.486104   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.486673   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.486681   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.486687   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.486691   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.488609   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:49.983081   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.983096   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.983104   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.983108   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.985873   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.986420   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.986428   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.986434   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.986437   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.988349   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:50.482957   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:50.482970   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.482976   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.482980   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.485479   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:50.485920   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:50.485927   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.485934   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.485938   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.487720   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:50.488107   56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:50.983210   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:50.983225   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.983232   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.983236   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.986255   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:50.986840   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:50.986849   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.986855   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.986866   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.989948   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:51.483355   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:51.483374   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.483388   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.483395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.486820   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:51.487280   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:51.487287   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.487293   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.487297   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.489325   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:51.983090   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:51.983105   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.983112   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.983115   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.984988   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:51.985393   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:51.985401   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.985405   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.985410   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.986930   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:52.484493   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:52.484507   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.484516   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.484521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.487250   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:52.487686   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:52.487694   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.487698   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.487702   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.489501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:52.489895   56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:52.983025   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:52.983048   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.983059   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.983066   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.986110   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:52.986621   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:52.986629   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.986634   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.986639   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.988098   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:53.484742   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:53.484762   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:53.484773   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:53.484779   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:53.488010   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:53.488477   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:53.488487   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:53.488495   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:53.488501   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:53.490598   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:53.982981   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:54.035555   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.035577   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.035582   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.038056   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:54.038420   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:54.038427   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.038431   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.038436   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.040740   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:54.483231   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:54.483250   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.483259   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.483268   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.486904   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:54.487432   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:54.487440   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.487445   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.487453   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.489085   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.489450   56262 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:54.489459   56262 pod_ready.go:81] duration metric: took 6.006607245s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.489472   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.489506   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:21:54.489511   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.489516   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.489520   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.491341   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.492125   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:21:54.492155   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.492161   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.492166   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.494017   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.494387   56262 pod_ready.go:92] pod "kube-proxy-b45s6" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:54.494395   56262 pod_ready.go:81] duration metric: took 4.917824ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.494401   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.494436   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:54.494441   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.494447   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.494452   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.496166   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.496620   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:54.496627   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.496633   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.496637   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.498306   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.996074   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:54.996123   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.996136   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.996145   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.999201   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:54.999706   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:54.999714   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.999720   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.999724   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.001519   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:55.495423   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:55.495482   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.495494   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.495500   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.498280   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:55.498730   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:55.498738   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.498744   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.498748   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.500462   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:55.995317   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:55.995337   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.995349   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.995356   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.998789   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:55.999222   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:55.999231   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.999238   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.999241   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.001041   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:56.494888   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:56.494946   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.494958   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.494968   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.497790   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:56.498347   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:56.498358   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.498365   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.498371   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.500278   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:56.500656   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:56.994875   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:56.994892   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.994900   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.994906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.998618   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:56.999206   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:56.999214   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.999220   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.999223   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.000855   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:57.495334   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:57.495358   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.495370   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.495375   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.498502   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:57.498951   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:57.498958   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.498963   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.498966   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.500746   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:57.995520   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:57.995543   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.995579   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.995598   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.998407   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:57.998972   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:57.998979   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.998985   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.999001   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.000625   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:58.495031   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:58.495049   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:58.495061   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:58.495067   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.498099   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:58.498667   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:58.498677   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:58.498685   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:58.498691   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.500315   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:58.995219   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.001733   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.001744   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.001750   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.004276   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:59.004776   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.004783   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.004788   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.004792   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.006346   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:59.006731   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:59.495209   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.495224   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.495243   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.495269   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.498470   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:59.498897   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.498905   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.498911   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.498915   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.501440   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:59.995151   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.995179   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.995191   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.995198   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.998453   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:59.999020   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.999031   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.999039   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.999043   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.000983   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:00.495135   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:00.495148   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.495154   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.495158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.498254   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:00.499175   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:00.499184   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.499190   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.499193   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.501895   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:00.995194   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:00.995216   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.995229   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.995237   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.998468   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:00.998920   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:00.998926   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.998932   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.998935   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.000600   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:01.494835   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:01.494860   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.494871   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.494877   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.497889   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:01.498547   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:01.498554   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.498558   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.498561   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.500447   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:01.500751   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:22:01.996453   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:01.996472   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.996483   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.996490   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.999407   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:01.999918   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:01.999925   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.999931   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.999934   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.001706   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:02.495361   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:02.495382   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.495393   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.495400   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.498902   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:02.499504   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:02.499511   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.499517   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.499521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.501049   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:02.995527   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:02.995548   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.995559   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.995565   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.998530   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:02.998981   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:02.998988   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.998994   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.998999   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.000798   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:03.495714   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:03.495730   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:03.495737   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:03.495741   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.498051   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:03.498563   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:03.498571   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:03.498576   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:03.498588   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.500374   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:03.995061   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.002434   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.002442   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.002447   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.004861   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:04.005402   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.005409   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.005415   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.005418   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.011753   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:04.012403   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:22:04.494873   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.494893   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.494902   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.494906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.497460   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:04.497938   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.497946   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.497951   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.497960   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.499356   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:04.995159   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.995178   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.995188   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.995195   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.998687   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:04.999335   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.999342   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.999348   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.999353   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.000905   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.494984   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:05.494997   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.495003   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.495007   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.497333   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.497727   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:05.497735   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.497741   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.497744   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.499501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.500069   56262 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.500079   56262 pod_ready.go:81] duration metric: took 11.005361676s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.500095   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.500132   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zwgd2
	I0505 14:22:05.500137   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.500142   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.500146   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.502320   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.502750   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:22:05.502757   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.502763   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.502767   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.504769   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.505126   56262 pod_ready.go:92] pod "kube-proxy-zwgd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.505135   56262 pod_ready.go:81] duration metric: took 5.036025ms for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.505142   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.505179   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:22:05.505184   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.505189   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.505194   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.507083   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.507461   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:05.507468   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.507473   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.507477   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.509224   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.509709   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.509724   56262 pod_ready.go:81] duration metric: took 4.57068ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.509732   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.509767   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:22:05.509771   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.509777   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.509780   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.511597   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.511989   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:22:05.511996   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.512000   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.512010   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.514080   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.514548   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.514556   56262 pod_ready.go:81] duration metric: took 4.819427ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.514563   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.514599   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m03
	I0505 14:22:05.514603   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.514609   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.514612   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.516436   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.516907   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:22:05.516914   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.516919   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.516923   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.519043   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.519280   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.519288   56262 pod_ready.go:81] duration metric: took 4.719804ms for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.519294   56262 pod_ready.go:38] duration metric: took 28.365933714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:22:05.519320   56262 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:22:05.519375   56262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:22:05.533426   56262 api_server.go:72] duration metric: took 37.809561996s to wait for apiserver process to appear ...
	I0505 14:22:05.533438   56262 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:22:05.533454   56262 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
	I0505 14:22:05.537141   56262 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
	ok
	I0505 14:22:05.537173   56262 round_trippers.go:463] GET https://192.169.0.51:8443/version
	I0505 14:22:05.537183   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.537191   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.537195   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.537884   56262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0505 14:22:05.538028   56262 api_server.go:141] control plane version: v1.30.0
	I0505 14:22:05.538038   56262 api_server.go:131] duration metric: took 4.594882ms to wait for apiserver health ...
	I0505 14:22:05.538049   56262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 14:22:05.696401   56262 request.go:629] Waited for 158.305976ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:05.696517   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:05.696529   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.696539   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.696547   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.703009   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:05.708412   56262 system_pods.go:59] 26 kube-system pods found
	I0505 14:22:05.708432   56262 system_pods.go:61] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:05.708439   56262 system_pods.go:61] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:05.708445   56262 system_pods.go:61] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:22:05.708448   56262 system_pods.go:61] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:22:05.708451   56262 system_pods.go:61] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
	I0505 14:22:05.708458   56262 system_pods.go:61] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
	I0505 14:22:05.708462   56262 system_pods.go:61] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:22:05.708464   56262 system_pods.go:61] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:22:05.708468   56262 system_pods.go:61] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0505 14:22:05.708471   56262 system_pods.go:61] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:22:05.708474   56262 system_pods.go:61] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:22:05.708477   56262 system_pods.go:61] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
	I0505 14:22:05.708482   56262 system_pods.go:61] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:22:05.708487   56262 system_pods.go:61] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:22:05.708489   56262 system_pods.go:61] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
	I0505 14:22:05.708493   56262 system_pods.go:61] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:22:05.708495   56262 system_pods.go:61] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:22:05.708497   56262 system_pods.go:61] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:22:05.708500   56262 system_pods.go:61] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
	I0505 14:22:05.708502   56262 system_pods.go:61] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:22:05.708505   56262 system_pods.go:61] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:22:05.708507   56262 system_pods.go:61] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
	I0505 14:22:05.708510   56262 system_pods.go:61] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:22:05.708512   56262 system_pods.go:61] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:22:05.708515   56262 system_pods.go:61] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
	I0505 14:22:05.708520   56262 system_pods.go:61] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:22:05.708525   56262 system_pods.go:74] duration metric: took 170.469417ms to wait for pod list to return data ...
	I0505 14:22:05.708531   56262 default_sa.go:34] waiting for default service account to be created ...
	I0505 14:22:05.897069   56262 request.go:629] Waited for 188.474109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:22:05.897179   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:22:05.897186   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.897194   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.897199   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.950188   56262 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0505 14:22:05.950392   56262 default_sa.go:45] found service account: "default"
	I0505 14:22:05.950405   56262 default_sa.go:55] duration metric: took 241.864725ms for default service account to be created ...
	I0505 14:22:05.950412   56262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 14:22:06.095263   56262 request.go:629] Waited for 144.804696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:06.095366   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:06.095376   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:06.095388   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:06.095395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:06.102144   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:06.107768   56262 system_pods.go:86] 26 kube-system pods found
	I0505 14:22:06.107783   56262 system_pods.go:89] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:06.107794   56262 system_pods.go:89] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:06.107800   56262 system_pods.go:89] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:22:06.107803   56262 system_pods.go:89] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:22:06.107808   56262 system_pods.go:89] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
	I0505 14:22:06.107811   56262 system_pods.go:89] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
	I0505 14:22:06.107815   56262 system_pods.go:89] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:22:06.107818   56262 system_pods.go:89] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:22:06.107823   56262 system_pods.go:89] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0505 14:22:06.107826   56262 system_pods.go:89] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:22:06.107831   56262 system_pods.go:89] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:22:06.107834   56262 system_pods.go:89] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
	I0505 14:22:06.107838   56262 system_pods.go:89] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:22:06.107842   56262 system_pods.go:89] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:22:06.107847   56262 system_pods.go:89] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
	I0505 14:22:06.107854   56262 system_pods.go:89] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:22:06.107862   56262 system_pods.go:89] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:22:06.107866   56262 system_pods.go:89] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:22:06.107869   56262 system_pods.go:89] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
	I0505 14:22:06.107874   56262 system_pods.go:89] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:22:06.107877   56262 system_pods.go:89] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:22:06.107887   56262 system_pods.go:89] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
	I0505 14:22:06.107890   56262 system_pods.go:89] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:22:06.107894   56262 system_pods.go:89] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:22:06.107897   56262 system_pods.go:89] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
	I0505 14:22:06.107900   56262 system_pods.go:89] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:22:06.107905   56262 system_pods.go:126] duration metric: took 157.48572ms to wait for k8s-apps to be running ...
	I0505 14:22:06.107910   56262 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 14:22:06.107954   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:22:06.119916   56262 system_svc.go:56] duration metric: took 12.002036ms WaitForService to wait for kubelet
	I0505 14:22:06.119930   56262 kubeadm.go:576] duration metric: took 38.396059047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:22:06.119941   56262 node_conditions.go:102] verifying NodePressure condition ...
	I0505 14:22:06.295252   56262 request.go:629] Waited for 175.271788ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
	I0505 14:22:06.295332   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
	I0505 14:22:06.295338   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:06.295345   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:06.295350   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:06.299820   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:22:06.300760   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300774   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300783   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300787   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300791   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300794   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300797   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300801   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300804   56262 node_conditions.go:105] duration metric: took 180.85639ms to run NodePressure ...
	I0505 14:22:06.300811   56262 start.go:240] waiting for startup goroutines ...
	I0505 14:22:06.300829   56262 start.go:254] writing updated cluster config ...
	I0505 14:22:06.322636   56262 out.go:177] 
	I0505 14:22:06.343913   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:22:06.344042   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.366539   56262 out.go:177] * Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
	I0505 14:22:06.408466   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:22:06.408493   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:22:06.408686   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:22:06.408703   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:22:06.408834   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.409908   56262 start.go:360] acquireMachinesLock for ha-671000-m03: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:22:06.409993   56262 start.go:364] duration metric: took 67.566µs to acquireMachinesLock for "ha-671000-m03"
	I0505 14:22:06.410011   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:22:06.410016   56262 fix.go:54] fixHost starting: m03
	I0505 14:22:06.410315   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:22:06.410333   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:22:06.419592   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57925
	I0505 14:22:06.419993   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:22:06.420359   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:22:06.420375   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:22:06.420588   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:22:06.420701   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:06.420780   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetState
	I0505 14:22:06.420862   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.420955   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 55740
	I0505 14:22:06.421873   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
	I0505 14:22:06.421938   56262 fix.go:112] recreateIfNeeded on ha-671000-m03: state=Stopped err=<nil>
	I0505 14:22:06.421958   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	W0505 14:22:06.422054   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:22:06.443498   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m03" ...
	I0505 14:22:06.485588   56262 main.go:141] libmachine: (ha-671000-m03) Calling .Start
	I0505 14:22:06.485823   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.485876   56262 main.go:141] libmachine: (ha-671000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid
	I0505 14:22:06.487603   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
	I0505 14:22:06.487617   56262 main.go:141] libmachine: (ha-671000-m03) DBG | pid 55740 is in state "Stopped"
	I0505 14:22:06.487633   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid...
	I0505 14:22:06.488242   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Using UUID be90591f-7869-4905-ae38-2f481381ca7c
	I0505 14:22:06.514163   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Generated MAC ce:17:a:56:1e:f8
	I0505 14:22:06.514197   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:22:06.514318   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:22:06.514365   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:22:06.514413   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "be90591f-7869-4905-ae38-2f481381ca7c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:22:06.514460   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U be90591f-7869-4905-ae38-2f481381ca7c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:22:06.514470   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:22:06.515957   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Pid is 56300
	I0505 14:22:06.516349   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Attempt 0
	I0505 14:22:06.516370   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.516444   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 56300
	I0505 14:22:06.518246   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Searching for ce:17:a:56:1e:f8 in /var/db/dhcpd_leases ...
	I0505 14:22:06.518360   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:22:06.518376   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
	I0505 14:22:06.518417   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:22:06.518433   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:22:06.518449   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
	I0505 14:22:06.518457   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found match: ce:17:a:56:1e:f8
	I0505 14:22:06.518467   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetConfigRaw
	I0505 14:22:06.518473   56262 main.go:141] libmachine: (ha-671000-m03) DBG | IP: 192.169.0.53
	I0505 14:22:06.519132   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:06.519357   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.519808   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:22:06.519818   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:06.519942   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:06.520079   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:06.520182   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:06.520284   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:06.520381   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:06.520488   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:06.520648   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:06.520655   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:22:06.524407   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:22:06.532556   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:22:06.533607   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:22:06.533622   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:22:06.533633   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:22:06.533644   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:22:06.917916   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:22:06.917942   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:22:07.032632   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:22:07.032653   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:22:07.032677   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:22:07.032689   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:22:07.033533   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:22:07.033546   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:22:12.402771   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:22:12.402786   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:22:12.402806   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:22:12.426606   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:22:41.581350   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:22:41.581367   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.581506   56262 buildroot.go:166] provisioning hostname "ha-671000-m03"
	I0505 14:22:41.581517   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.581600   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.581683   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.581781   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.581875   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.581960   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.582100   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.582238   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.582247   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m03 && echo "ha-671000-m03" | sudo tee /etc/hostname
	I0505 14:22:41.647083   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m03
	
	I0505 14:22:41.647098   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.647232   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.647343   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.647430   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.647521   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.647657   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.647849   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.647862   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:22:41.709306   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:22:41.709326   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:22:41.709344   56262 buildroot.go:174] setting up certificates
	I0505 14:22:41.709357   56262 provision.go:84] configureAuth start
	I0505 14:22:41.709363   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.709499   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:41.709593   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.709680   56262 provision.go:143] copyHostCerts
	I0505 14:22:41.709715   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:22:41.709786   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:22:41.709792   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:22:41.709937   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:22:41.710168   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:22:41.710212   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:22:41.710217   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:22:41.710297   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:22:41.710445   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:22:41.710490   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:22:41.710497   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:22:41.710575   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:22:41.710718   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m03 san=[127.0.0.1 192.169.0.53 ha-671000-m03 localhost minikube]
	I0505 14:22:41.753782   56262 provision.go:177] copyRemoteCerts
	I0505 14:22:41.753842   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:22:41.753857   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.753999   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.754106   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.754195   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.754274   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:41.788993   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:22:41.789066   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:22:41.808008   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:22:41.808084   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:22:41.828147   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:22:41.828228   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:22:41.848543   56262 provision.go:87] duration metric: took 139.178952ms to configureAuth
	I0505 14:22:41.848558   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:22:41.848732   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:22:41.848746   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:41.848890   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.848974   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.849066   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.849145   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.849226   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.849346   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.849468   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.849476   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:22:41.905134   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:22:41.905147   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:22:41.905226   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:22:41.905236   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.905372   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.905459   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.905559   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.905645   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.905773   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.905913   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.905965   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	Environment="NO_PROXY=192.169.0.51,192.169.0.52"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:22:41.971506   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	Environment=NO_PROXY=192.169.0.51,192.169.0.52
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:22:41.971532   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.971667   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.971753   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.971832   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.971919   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.972061   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.972206   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.972218   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:22:43.586757   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:22:43.586772   56262 machine.go:97] duration metric: took 37.066967123s to provisionDockerMachine
	I0505 14:22:43.586795   56262 start.go:293] postStartSetup for "ha-671000-m03" (driver="hyperkit")
	I0505 14:22:43.586804   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:22:43.586816   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.587008   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:22:43.587022   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.587109   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.587250   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.587368   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.587470   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.621728   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:22:43.624913   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:22:43.624927   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:22:43.625027   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:22:43.625208   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:22:43.625215   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:22:43.625422   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:22:43.632883   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:22:43.652930   56262 start.go:296] duration metric: took 66.125789ms for postStartSetup
	I0505 14:22:43.652964   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.653131   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:22:43.653145   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.653240   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.653328   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.653413   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.653505   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.687474   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:22:43.687532   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:22:43.719424   56262 fix.go:56] duration metric: took 37.309414657s for fixHost
	I0505 14:22:43.719447   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.719581   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.719680   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.719767   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.719859   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.719991   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:43.720140   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:43.720147   56262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:22:43.777003   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944163.917671963
	
	I0505 14:22:43.777016   56262 fix.go:216] guest clock: 1714944163.917671963
	I0505 14:22:43.777022   56262 fix.go:229] Guest: 2024-05-05 14:22:43.917671963 -0700 PDT Remote: 2024-05-05 14:22:43.719438 -0700 PDT m=+114.784889102 (delta=198.233963ms)
	I0505 14:22:43.777033   56262 fix.go:200] guest clock delta is within tolerance: 198.233963ms
	I0505 14:22:43.777036   56262 start.go:83] releasing machines lock for "ha-671000-m03", held for 37.367046714s
	I0505 14:22:43.777054   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.777184   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:43.798458   56262 out.go:177] * Found network options:
	I0505 14:22:43.818375   56262 out.go:177]   - NO_PROXY=192.169.0.51,192.169.0.52
	W0505 14:22:43.839196   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:22:43.839212   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:22:43.839223   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839636   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839763   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839847   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:22:43.839883   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	W0505 14:22:43.839885   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:22:43.839898   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:22:43.839953   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:22:43.839970   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.839989   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.840065   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.840123   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.840188   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.840221   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.840303   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.840332   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.840420   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	W0505 14:22:43.919168   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:22:43.919245   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:22:43.936501   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:22:43.936515   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:22:43.936582   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:22:43.953774   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:22:43.963068   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:22:43.972111   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:22:43.972163   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:22:43.981147   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:22:44.011701   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:22:44.020897   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:22:44.030143   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:22:44.039491   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:22:44.048778   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:22:44.057937   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:22:44.067298   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:22:44.075698   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:22:44.083983   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:22:44.200980   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:22:44.219877   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:22:44.219946   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:22:44.236639   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:22:44.254367   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:22:44.271268   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:22:44.282915   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:22:44.293466   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:22:44.317181   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:22:44.327878   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:22:44.343024   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:22:44.346054   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:22:44.353257   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:22:44.367082   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:22:44.465180   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:22:44.569600   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:22:44.569629   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:22:44.584431   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:22:44.680947   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:23:45.736510   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.056089884s)
	I0505 14:23:45.736595   56262 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0505 14:23:45.770790   56262 out.go:177] 
	W0505 14:23:45.791249   56262 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 05 21:22:41 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.352208248Z" level=info msg="Starting up"
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.353022730Z" level=info msg="containerd not running, starting managed containerd"
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.358767057Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.373539189Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388000547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388073973Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388137944Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388171760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388313706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388355785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388477111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388518957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388551610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388580389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388726935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388950191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390520791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390570725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390706880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390751886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390888815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390940476Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390972496Z" level=info msg="metadata content store policy set" policy=shared
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394800432Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394883868Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394961138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395000278Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395036706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395111009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395337703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395418767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395454129Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395484232Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395514263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395546554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395576938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395607440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395641518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395677040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395708605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395737963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395799761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395843188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395874408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395904381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395933636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395965927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395995431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396033716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396067448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396098841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396127871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396155969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396184510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396215668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396250321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396280045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396307939Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396379697Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396424577Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396475305Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396510849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396569471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396621386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396656010Z" level=info msg="NRI interface is disabled by configuration."
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396883316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396972499Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397031244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397069101Z" level=info msg="containerd successfully booted in 0.024677s"
	May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.379929944Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.413119848Z" level=info msg="Loading containers: start."
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.663705690Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.700545709Z" level=info msg="Loading containers: done."
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707501270Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707669278Z" level=info msg="Daemon has completed initialization"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725886686Z" level=info msg="API listen on [::]:2376"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725971765Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 21:22:43 ha-671000-m03 systemd[1]: Started Docker Application Container Engine.
	May 05 21:22:44 ha-671000-m03 systemd[1]: Stopping Docker Application Container Engine...
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.833114404Z" level=info msg="Processing signal 'terminated'"
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834199869Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834666188Z" level=info msg="Daemon shutdown complete"
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834695637Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834707874Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 21:22:45 ha-671000-m03 systemd[1]: docker.service: Deactivated successfully.
	May 05 21:22:45 ha-671000-m03 systemd[1]: Stopped Docker Application Container Engine.
	May 05 21:22:45 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
	May 05 21:22:45 ha-671000-m03 dockerd[1073]: time="2024-05-05T21:22:45.887265470Z" level=info msg="Starting up"
	May 05 21:23:45 ha-671000-m03 dockerd[1073]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 21:23:45 ha-671000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0505 14:23:45.791332   56262 out.go:239] * 
	W0505 14:23:45.791963   56262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:23:45.854203   56262 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-671000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-671000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-671000 -n ha-671000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 logs -n 25: (3.107156771s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m02:/home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m02 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04:/home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m04 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp testdata/cp-test.txt                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000:/home/docker/cp-test_ha-671000-m04_ha-671000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000 sudo cat                                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m02:/home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m02 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03:/home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m03 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-671000 node stop m02 -v=7                                                                                                 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-671000 node start m02 -v=7                                                                                                | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:20 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-671000 -v=7                                                                                                       | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-671000 -v=7                                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT | 05 May 24 14:20 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-671000 --wait=true -v=7                                                                                                | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-671000                                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:23 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 14:20:48
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 14:20:48.965096   56262 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:20:48.965304   56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:20:48.965309   56262 out.go:304] Setting ErrFile to fd 2...
	I0505 14:20:48.965313   56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:20:48.965501   56262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:20:48.966984   56262 out.go:298] Setting JSON to false
	I0505 14:20:48.991851   56262 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19219,"bootTime":1714924829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 14:20:48.991949   56262 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:20:49.013239   56262 out.go:177] * [ha-671000] minikube v1.33.0 on Darwin 14.4.1
	I0505 14:20:49.055173   56262 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:20:49.055223   56262 notify.go:220] Checking for updates...
	I0505 14:20:49.077109   56262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:20:49.097964   56262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 14:20:49.119233   56262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:20:49.139935   56262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:20:49.161146   56262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:20:49.182881   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:20:49.183046   56262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:20:49.183689   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.183764   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:20:49.193369   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57871
	I0505 14:20:49.193700   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:20:49.194120   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:20:49.194134   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:20:49.194326   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:20:49.194462   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.223183   56262 out.go:177] * Using the hyperkit driver based on existing profile
	I0505 14:20:49.265211   56262 start.go:297] selected driver: hyperkit
	I0505 14:20:49.265249   56262 start.go:901] validating driver "hyperkit" against &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:20:49.265473   56262 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:20:49.265691   56262 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:20:49.265889   56262 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 14:20:49.275605   56262 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 14:20:49.280711   56262 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.280731   56262 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 14:20:49.284127   56262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:20:49.284202   56262 cni.go:84] Creating CNI manager for ""
	I0505 14:20:49.284211   56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:20:49.284292   56262 start.go:340] cluster config:
	{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false he
lm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:20:49.284394   56262 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:20:49.326088   56262 out.go:177] * Starting "ha-671000" primary control-plane node in "ha-671000" cluster
	I0505 14:20:49.347002   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:20:49.347074   56262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 14:20:49.347098   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:20:49.347288   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:20:49.347306   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:20:49.347472   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:20:49.348516   56262 start.go:360] acquireMachinesLock for ha-671000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:20:49.348656   56262 start.go:364] duration metric: took 99.405µs to acquireMachinesLock for "ha-671000"
	I0505 14:20:49.348707   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:20:49.348726   56262 fix.go:54] fixHost starting: 
	I0505 14:20:49.349125   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.349160   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:20:49.358523   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57873
	I0505 14:20:49.358884   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:20:49.359279   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:20:49.359298   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:20:49.359523   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:20:49.359669   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.359788   56262 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:20:49.359894   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.359963   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 55694
	I0505 14:20:49.360866   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
	I0505 14:20:49.360926   56262 fix.go:112] recreateIfNeeded on ha-671000: state=Stopped err=<nil>
	I0505 14:20:49.360950   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	W0505 14:20:49.361041   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:20:49.402877   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000" ...
	I0505 14:20:49.423939   56262 main.go:141] libmachine: (ha-671000) Calling .Start
	I0505 14:20:49.424311   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.424354   56262 main.go:141] libmachine: (ha-671000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid
	I0505 14:20:49.426302   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
	I0505 14:20:49.426313   56262 main.go:141] libmachine: (ha-671000) DBG | pid 55694 is in state "Stopped"
	I0505 14:20:49.426344   56262 main.go:141] libmachine: (ha-671000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid...
	I0505 14:20:49.426771   56262 main.go:141] libmachine: (ha-671000) DBG | Using UUID 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96
	I0505 14:20:49.551381   56262 main.go:141] libmachine: (ha-671000) DBG | Generated MAC 72:52:a3:7d:5c:d1
	I0505 14:20:49.551411   56262 main.go:141] libmachine: (ha-671000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:20:49.551646   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:20:49.551692   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:20:49.551780   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyp
rintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:20:49.551846   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nom
odeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:20:49.551864   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:20:49.553184   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Pid is 56275
	I0505 14:20:49.553639   56262 main.go:141] libmachine: (ha-671000) DBG | Attempt 0
	I0505 14:20:49.553663   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.553735   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:20:49.555494   56262 main.go:141] libmachine: (ha-671000) DBG | Searching for 72:52:a3:7d:5c:d1 in /var/db/dhcpd_leases ...
	I0505 14:20:49.555595   56262 main.go:141] libmachine: (ha-671000) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:20:49.555611   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:20:49.555629   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
	I0505 14:20:49.555648   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
	I0505 14:20:49.555661   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394853}
	I0505 14:20:49.555667   56262 main.go:141] libmachine: (ha-671000) DBG | Found match: 72:52:a3:7d:5c:d1
	I0505 14:20:49.555674   56262 main.go:141] libmachine: (ha-671000) DBG | IP: 192.169.0.51
	I0505 14:20:49.555696   56262 main.go:141] libmachine: (ha-671000) Calling .GetConfigRaw
	I0505 14:20:49.556342   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:20:49.556516   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:20:49.556975   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:20:49.556985   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.557119   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:20:49.557222   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:20:49.557336   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:20:49.557465   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:20:49.557602   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:20:49.557742   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:20:49.557972   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:20:49.557981   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:20:49.561305   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:20:49.617858   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:20:49.618520   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:20:49.618541   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:20:49.618548   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:20:49.618556   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:20:50.003923   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:20:50.003954   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:20:50.118574   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:20:50.118591   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:20:50.118604   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:20:50.118620   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:20:50.119491   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:20:50.119502   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:20:55.386088   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:20:55.386105   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:20:55.386124   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:20:55.410129   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:20:59.165992   56262 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.51:22: connect: connection refused
	I0505 14:21:02.226047   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:21:02.226063   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.226198   56262 buildroot.go:166] provisioning hostname "ha-671000"
	I0505 14:21:02.226208   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.226303   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.226392   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.226492   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.226582   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.226673   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.226801   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.226937   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.226945   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000 && echo "ha-671000" | sudo tee /etc/hostname
	I0505 14:21:02.297369   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000
	
	I0505 14:21:02.297395   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.297543   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.297643   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.297751   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.297848   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.297983   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.298121   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.298132   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:21:02.363709   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:21:02.363736   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:21:02.363757   56262 buildroot.go:174] setting up certificates
	I0505 14:21:02.363764   56262 provision.go:84] configureAuth start
	I0505 14:21:02.363771   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.363911   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:02.364012   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.364108   56262 provision.go:143] copyHostCerts
	I0505 14:21:02.364139   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:02.364208   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:21:02.364216   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:02.364363   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:21:02.364576   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:02.364616   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:21:02.364621   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:02.364702   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:21:02.364858   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:02.364899   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:21:02.364904   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:02.364979   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:21:02.365133   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000 san=[127.0.0.1 192.169.0.51 ha-671000 localhost minikube]
	I0505 14:21:02.566783   56262 provision.go:177] copyRemoteCerts
	I0505 14:21:02.566851   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:21:02.566867   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.567002   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.567081   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.567166   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.567249   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:02.603993   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:21:02.604064   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:21:02.623864   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:21:02.623931   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0505 14:21:02.642984   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:21:02.643054   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 14:21:02.662651   56262 provision.go:87] duration metric: took 298.874135ms to configureAuth
	I0505 14:21:02.662663   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:21:02.662832   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:02.662845   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:02.662976   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.663065   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.663164   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.663269   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.663357   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.663467   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.663594   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.663602   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:21:02.721847   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:21:02.721864   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:21:02.721944   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:21:02.721957   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.722094   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.722182   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.722290   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.722379   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.722504   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.722641   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.722685   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:21:02.791477   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:21:02.791499   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.791628   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.791713   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.791806   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.791895   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.792000   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.792138   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.792148   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:21:04.463791   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:21:04.463805   56262 machine.go:97] duration metric: took 14.90688888s to provisionDockerMachine
	I0505 14:21:04.463814   56262 start.go:293] postStartSetup for "ha-671000" (driver="hyperkit")
	I0505 14:21:04.463821   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:21:04.463832   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.464011   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:21:04.464034   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.464144   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.464235   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.464343   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.464431   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.510297   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:21:04.514333   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:21:04.514346   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:21:04.514446   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:21:04.514637   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:21:04.514644   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:21:04.514851   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:21:04.528097   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:04.557607   56262 start.go:296] duration metric: took 93.785206ms for postStartSetup
	I0505 14:21:04.557630   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.557802   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:21:04.557815   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.557914   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.558026   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.558104   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.558180   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.595384   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:21:04.595439   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:21:04.627954   56262 fix.go:56] duration metric: took 15.279298664s for fixHost
	I0505 14:21:04.627978   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.628106   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.628210   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.628316   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.628400   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.628519   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:04.628664   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:04.628671   56262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:21:04.687788   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944064.851392424
	
	I0505 14:21:04.687801   56262 fix.go:216] guest clock: 1714944064.851392424
	I0505 14:21:04.687806   56262 fix.go:229] Guest: 2024-05-05 14:21:04.851392424 -0700 PDT Remote: 2024-05-05 14:21:04.627967 -0700 PDT m=+15.708271847 (delta=223.425424ms)
	I0505 14:21:04.687822   56262 fix.go:200] guest clock delta is within tolerance: 223.425424ms
	I0505 14:21:04.687828   56262 start.go:83] releasing machines lock for "ha-671000", held for 15.339229169s
	I0505 14:21:04.687844   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.687975   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:04.688073   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688362   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688461   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688537   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:21:04.688563   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.688585   56262 ssh_runner.go:195] Run: cat /version.json
	I0505 14:21:04.688594   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.688666   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.688681   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.688776   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.688794   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.688857   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.688870   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.688932   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.688951   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.773179   56262 ssh_runner.go:195] Run: systemctl --version
	I0505 14:21:04.778074   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:21:04.782225   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:21:04.782267   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:21:04.795505   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:21:04.795515   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:04.795626   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:04.813193   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:21:04.822043   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:21:04.830859   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:21:04.830912   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:21:04.839650   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:04.848348   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:21:04.857332   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:04.866100   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:21:04.874955   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:21:04.883995   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:21:04.892686   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:21:04.901641   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:21:04.909531   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:21:04.917434   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:05.025345   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:21:05.045401   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:05.045483   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:21:05.056970   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:05.067558   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:21:05.082472   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:05.093595   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:05.104660   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:21:05.123434   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:05.136644   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:05.151834   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:21:05.154642   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:21:05.162375   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:21:05.175761   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:21:05.270844   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:21:05.375810   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:21:05.375883   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:21:05.390245   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:05.495960   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:21:07.797662   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301692609s)
	I0505 14:21:07.797733   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:21:07.809357   56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:21:07.822066   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:07.832350   56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:21:07.930252   56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:21:08.029360   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.124190   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:21:08.137986   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:08.149027   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.258895   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:21:08.326102   56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:21:08.326177   56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:21:08.330736   56262 start.go:562] Will wait 60s for crictl version
	I0505 14:21:08.330787   56262 ssh_runner.go:195] Run: which crictl
	I0505 14:21:08.333926   56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:21:08.360867   56262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:21:08.360957   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:08.380536   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:08.444390   56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:21:08.444441   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:08.444833   56262 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:21:08.449245   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:08.459088   56262 kubeadm.go:877] updating cluster {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fal
se freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 14:21:08.459178   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:21:08.459237   56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:21:08.472336   56262 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:21:08.472348   56262 docker.go:615] Images already preloaded, skipping extraction
	I0505 14:21:08.472419   56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:21:08.484264   56262 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:21:08.484284   56262 cache_images.go:84] Images are preloaded, skipping loading
	I0505 14:21:08.484299   56262 kubeadm.go:928] updating node { 192.169.0.51 8443 v1.30.0 docker true true} ...
	I0505 14:21:08.484375   56262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:21:08.484439   56262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:21:08.500967   56262 cni.go:84] Creating CNI manager for ""
	I0505 14:21:08.500979   56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:21:08.500990   56262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:21:08.501005   56262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671000 NodeName:ha-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:21:08.501088   56262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-671000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 14:21:08.501113   56262 kube-vip.go:111] generating kube-vip config ...
	I0505 14:21:08.501162   56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:21:08.513119   56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:21:08.513193   56262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:21:08.513250   56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:21:08.521487   56262 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:21:08.521531   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 14:21:08.528952   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0505 14:21:08.542487   56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:21:08.556157   56262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0505 14:21:08.570110   56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:21:08.584111   56262 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:21:08.586992   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:08.596597   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.710024   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:08.724251   56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.51
	I0505 14:21:08.724262   56262 certs.go:194] generating shared ca certs ...
	I0505 14:21:08.724272   56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.724457   56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:21:08.724528   56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:21:08.724539   56262 certs.go:256] generating profile certs ...
	I0505 14:21:08.724648   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:21:08.724671   56262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190
	I0505 14:21:08.724686   56262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.53 192.169.0.254]
	I0505 14:21:08.826095   56262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 ...
	I0505 14:21:08.826111   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190: {Name:mk26b58616f2e9bcce56069037dda85d1d8c350c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.826754   56262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 ...
	I0505 14:21:08.826765   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190: {Name:mk7fc32008d240a4b7e6cb64bdeb1f596430582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.826983   56262 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
	I0505 14:21:08.827192   56262 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
	I0505 14:21:08.827434   56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:21:08.827443   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:21:08.827466   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:21:08.827487   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:21:08.827506   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:21:08.827523   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:21:08.827541   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:21:08.827559   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:21:08.827576   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:21:08.827667   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:21:08.827718   56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:21:08.827726   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:21:08.827758   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:21:08.827791   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:21:08.827822   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:21:08.827892   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:08.827924   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:08.827970   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:21:08.827988   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:21:08.828425   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:21:08.851250   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:21:08.872963   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:21:08.895079   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:21:08.922893   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 14:21:08.953937   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:21:08.983911   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:21:09.023252   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:21:09.070795   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:21:09.113576   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:21:09.150037   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:21:09.170089   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:21:09.184262   56262 ssh_runner.go:195] Run: openssl version
	I0505 14:21:09.188637   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:21:09.197186   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.200763   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.200802   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.205113   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:21:09.213846   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:21:09.222459   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.225992   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.226036   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.230212   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:21:09.238744   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:21:09.247131   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.250641   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.250684   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.254933   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:21:09.263283   56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:21:09.266913   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:21:09.271690   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:21:09.276202   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:21:09.280723   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:21:09.285120   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:21:09.289468   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:21:09.293767   56262 kubeadm.go:391] StartCluster: {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false
freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:21:09.293893   56262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:21:09.305167   56262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:21:09.312937   56262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:21:09.312947   56262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:21:09.312965   56262 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:21:09.313010   56262 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:21:09.320777   56262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:21:09.321098   56262 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671000" does not appear in /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.321183   56262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-53665/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671000" cluster setting kubeconfig missing "ha-671000" context setting]
	I0505 14:21:09.321347   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.321996   56262 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.322179   56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:21:09.322483   56262 cert_rotation.go:137] Starting client certificate rotation controller
	I0505 14:21:09.322660   56262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:21:09.330103   56262 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.51
	I0505 14:21:09.330115   56262 kubeadm.go:591] duration metric: took 17.1285ms to restartPrimaryControlPlane
	I0505 14:21:09.330120   56262 kubeadm.go:393] duration metric: took 36.320628ms to StartCluster
	I0505 14:21:09.330127   56262 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.330217   56262 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.330637   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.330863   56262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:21:09.330875   56262 start.go:240] waiting for startup goroutines ...
	I0505 14:21:09.330887   56262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:21:09.373046   56262 out.go:177] * Enabled addons: 
	I0505 14:21:09.331023   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:09.395270   56262 addons.go:510] duration metric: took 64.318856ms for enable addons: enabled=[]
	I0505 14:21:09.395388   56262 start.go:245] waiting for cluster config update ...
	I0505 14:21:09.395406   56262 start.go:254] writing updated cluster config ...
	I0505 14:21:09.418289   56262 out.go:177] 
	I0505 14:21:09.439589   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:09.439723   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.462158   56262 out.go:177] * Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
	I0505 14:21:09.504016   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:21:09.504076   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:21:09.504246   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:21:09.504264   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:21:09.504398   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.505447   56262 start.go:360] acquireMachinesLock for ha-671000-m02: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:21:09.505557   56262 start.go:364] duration metric: took 85.865µs to acquireMachinesLock for "ha-671000-m02"
	I0505 14:21:09.505582   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:21:09.505589   56262 fix.go:54] fixHost starting: m02
	I0505 14:21:09.506042   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:09.506080   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:09.515413   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57896
	I0505 14:21:09.515746   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:09.516119   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:09.516136   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:09.516414   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:09.516555   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:09.516655   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:21:09.516736   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.516805   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56210
	I0505 14:21:09.517744   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
	I0505 14:21:09.517764   56262 fix.go:112] recreateIfNeeded on ha-671000-m02: state=Stopped err=<nil>
	I0505 14:21:09.517774   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	W0505 14:21:09.517855   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:21:09.539362   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m02" ...
	I0505 14:21:09.581177   56262 main.go:141] libmachine: (ha-671000-m02) Calling .Start
	I0505 14:21:09.581513   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.581582   56262 main.go:141] libmachine: (ha-671000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid
	I0505 14:21:09.583319   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
	I0505 14:21:09.583336   56262 main.go:141] libmachine: (ha-671000-m02) DBG | pid 56210 is in state "Stopped"
	I0505 14:21:09.583361   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid...
	I0505 14:21:09.583762   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Using UUID 294bfc97-3e6f-4d68-b3f3-54381951a5e8
	I0505 14:21:09.611765   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Generated MAC 92:83:2c:36:f7:7d
	I0505 14:21:09.611789   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:21:09.611924   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:21:09.611964   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:21:09.612015   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "294bfc97-3e6f-4d68-b3f3-54381951a5e8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/
machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:21:09.612064   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 294bfc97-3e6f-4d68-b3f3-54381951a5e8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:21:09.612079   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:21:09.613498   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Pid is 56285
	I0505 14:21:09.613935   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Attempt 0
	I0505 14:21:09.613949   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.614012   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
	I0505 14:21:09.615713   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Searching for 92:83:2c:36:f7:7d in /var/db/dhcpd_leases ...
	I0505 14:21:09.615841   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:21:09.615860   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:21:09.615883   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:21:09.615897   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
	I0505 14:21:09.615905   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found match: 92:83:2c:36:f7:7d
	I0505 14:21:09.615916   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetConfigRaw
	I0505 14:21:09.615920   56262 main.go:141] libmachine: (ha-671000-m02) DBG | IP: 192.169.0.52
	I0505 14:21:09.616579   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:09.616779   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.617318   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:21:09.617329   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:09.617443   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:09.617536   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:09.617633   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:09.617737   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:09.617836   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:09.617968   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:09.618123   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:09.618132   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:21:09.621348   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:21:09.630281   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:21:09.631193   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:21:09.631218   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:21:09.631230   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:21:09.631252   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:21:10.019586   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:21:10.019603   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:21:10.134248   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:21:10.134266   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:21:10.134281   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:21:10.134292   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:21:10.135185   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:21:10.135199   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:21:15.419942   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:21:15.419970   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:21:15.419978   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:21:15.445269   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:21:20.698093   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:21:20.698110   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.698266   56262 buildroot.go:166] provisioning hostname "ha-671000-m02"
	I0505 14:21:20.698277   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.698366   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.698443   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.698518   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.698602   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.698696   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.698824   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:20.698977   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:20.698987   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m02 && echo "ha-671000-m02" | sudo tee /etc/hostname
	I0505 14:21:20.773304   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m02
	
	I0505 14:21:20.773319   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.773451   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.773547   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.773625   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.773710   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.773837   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:20.773989   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:20.774000   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:21:20.846506   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:21:20.846523   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:21:20.846532   56262 buildroot.go:174] setting up certificates
	I0505 14:21:20.846537   56262 provision.go:84] configureAuth start
	I0505 14:21:20.846545   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.846678   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:20.846753   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.846822   56262 provision.go:143] copyHostCerts
	I0505 14:21:20.846847   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:20.846900   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:21:20.846906   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:20.847106   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:21:20.847298   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:20.847327   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:21:20.847332   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:20.847414   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:21:20.847555   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:20.847584   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:21:20.847588   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:20.847657   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:21:20.847808   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m02 san=[127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube]
	I0505 14:21:20.923054   56262 provision.go:177] copyRemoteCerts
	I0505 14:21:20.923102   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:21:20.923114   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.923242   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.923344   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.923432   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.923508   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:20.963007   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:21:20.963079   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:21:20.982214   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:21:20.982293   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:21:21.001587   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:21:21.001658   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:21:21.020765   56262 provision.go:87] duration metric: took 174.141582ms to configureAuth
	I0505 14:21:21.020780   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:21:21.020945   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:21.020958   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:21.021085   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.021186   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.021280   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.021382   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.021493   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.021630   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.021764   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.021777   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:21:21.088593   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:21:21.088605   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:21:21.088686   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:21:21.088698   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.088827   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.088944   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.089047   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.089155   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.089299   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.089434   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.089481   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:21:21.165319   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:21:21.165336   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.165469   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.165561   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.165660   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.165755   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.165892   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.166034   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.166046   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:21:22.810399   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:21:22.810414   56262 machine.go:97] duration metric: took 13.184745912s to provisionDockerMachine
	I0505 14:21:22.810422   56262 start.go:293] postStartSetup for "ha-671000-m02" (driver="hyperkit")
	I0505 14:21:22.810435   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:21:22.810448   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:22.810630   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:21:22.810642   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.810731   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.810813   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.810958   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.811059   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:22.854108   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:21:22.857587   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:21:22.857599   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:21:22.857687   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:21:22.857827   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:21:22.857833   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:21:22.857984   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:21:22.870076   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:22.896680   56262 start.go:296] duration metric: took 86.209325ms for postStartSetup
	I0505 14:21:22.896713   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:22.896900   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:21:22.896916   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.897010   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.897116   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.897207   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.897282   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:22.937842   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:21:22.937898   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:21:22.971365   56262 fix.go:56] duration metric: took 13.45726146s for fixHost
	I0505 14:21:22.971396   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.971537   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.971639   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.971717   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.971804   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.971961   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:22.972106   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:22.972117   56262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:21:23.038093   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944083.052286945
	
	I0505 14:21:23.038109   56262 fix.go:216] guest clock: 1714944083.052286945
	I0505 14:21:23.038115   56262 fix.go:229] Guest: 2024-05-05 14:21:23.052286945 -0700 PDT Remote: 2024-05-05 14:21:22.971379 -0700 PDT m=+34.042274957 (delta=80.907945ms)
	I0505 14:21:23.038125   56262 fix.go:200] guest clock delta is within tolerance: 80.907945ms
	I0505 14:21:23.038129   56262 start.go:83] releasing machines lock for "ha-671000-m02", held for 13.524025366s
	I0505 14:21:23.038145   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.038286   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:23.061518   56262 out.go:177] * Found network options:
	I0505 14:21:23.083843   56262 out.go:177]   - NO_PROXY=192.169.0.51
	W0505 14:21:23.105432   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:21:23.105470   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106334   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106599   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106711   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:21:23.106753   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	W0505 14:21:23.106918   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:21:23.107013   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:21:23.107023   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:23.107033   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:23.107244   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:23.107275   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:23.107414   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:23.107468   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:23.107556   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:23.107590   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:23.107700   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	W0505 14:21:23.143066   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:21:23.143128   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:21:23.312270   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:21:23.312288   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:23.312377   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:23.327567   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:21:23.336186   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:21:23.344528   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:21:23.344575   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:21:23.352890   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:23.361005   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:21:23.369046   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:23.377280   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:21:23.385827   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:21:23.394012   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:21:23.402113   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:21:23.410536   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:21:23.418126   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:21:23.425500   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:23.526138   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:21:23.544818   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:23.544892   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:21:23.559895   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:23.572081   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:21:23.584840   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:23.595478   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:23.606028   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:21:23.632278   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:23.643848   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:23.658675   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:21:23.661665   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:21:23.669850   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:21:23.683220   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:21:23.786303   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:21:23.893788   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:21:23.893809   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:21:23.908293   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:24.010074   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:21:26.298709   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.287835945s)
	I0505 14:21:26.298771   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:21:26.310190   56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:21:26.324652   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:26.336377   56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:21:26.435974   56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:21:26.534723   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:26.647643   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:21:26.661375   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:26.672706   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:26.778709   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:21:26.840618   56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:21:26.840697   56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:21:26.844919   56262 start.go:562] Will wait 60s for crictl version
	I0505 14:21:26.844974   56262 ssh_runner.go:195] Run: which crictl
	I0505 14:21:26.849165   56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:21:26.874329   56262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:21:26.874403   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:26.890208   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:26.929797   56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:21:26.949648   56262 out.go:177]   - env NO_PROXY=192.169.0.51
	I0505 14:21:26.970782   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:26.971166   56262 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:21:26.975958   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:26.985550   56262 mustload.go:65] Loading cluster: ha-671000
	I0505 14:21:26.985727   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:26.985939   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:26.985954   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:26.994516   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57918
	I0505 14:21:26.994869   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:26.995203   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:26.995220   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:26.995417   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:26.995536   56262 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:21:26.995629   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:26.995703   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:21:26.996652   56262 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:21:26.996892   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:26.996917   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:27.005463   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57920
	I0505 14:21:27.005786   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:27.006124   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:27.006142   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:27.006378   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:27.006493   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:27.006597   56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.52
	I0505 14:21:27.006603   56262 certs.go:194] generating shared ca certs ...
	I0505 14:21:27.006614   56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:27.006755   56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:21:27.006813   56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:21:27.006821   56262 certs.go:256] generating profile certs ...
	I0505 14:21:27.006913   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:21:27.006999   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e823369f
	I0505 14:21:27.007048   56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:21:27.007055   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:21:27.007075   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:21:27.007095   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:21:27.007113   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:21:27.007130   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:21:27.007151   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:21:27.007170   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:21:27.007187   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:21:27.007262   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:21:27.007299   56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:21:27.007308   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:21:27.007341   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:21:27.007375   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:21:27.007408   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:21:27.007476   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:27.007517   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.007538   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.007556   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.007581   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:27.007663   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:27.007746   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:27.007820   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:27.007907   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:27.036107   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 14:21:27.039382   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 14:21:27.047195   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 14:21:27.050362   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0505 14:21:27.058524   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 14:21:27.061585   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 14:21:27.069461   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 14:21:27.072439   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 14:21:27.080982   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 14:21:27.084070   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 14:21:27.092062   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 14:21:27.095149   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 14:21:27.103105   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:21:27.123887   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:21:27.144018   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:21:27.164034   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:21:27.183960   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 14:21:27.204170   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:21:27.224085   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:21:27.244379   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:21:27.264411   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:21:27.283983   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:21:27.303697   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:21:27.323613   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 14:21:27.337907   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0505 14:21:27.351842   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 14:21:27.365462   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 14:21:27.379337   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 14:21:27.393337   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 14:21:27.406867   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 14:21:27.420462   56262 ssh_runner.go:195] Run: openssl version
	I0505 14:21:27.425063   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:21:27.433747   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.437275   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.437314   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.441663   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:21:27.450070   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:21:27.458559   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.462027   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.462088   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.466402   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:21:27.474903   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:21:27.484026   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.487471   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.487506   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.491806   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:21:27.500356   56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:21:27.503912   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:21:27.508255   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:21:27.512583   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:21:27.516997   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:21:27.521261   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:21:27.525514   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:21:27.529849   56262 kubeadm.go:928] updating node {m02 192.169.0.52 8443 v1.30.0 docker true true} ...
	I0505 14:21:27.529904   56262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:21:27.529918   56262 kube-vip.go:111] generating kube-vip config ...
	I0505 14:21:27.529952   56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:21:27.542376   56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:21:27.542421   56262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:21:27.542477   56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:21:27.550208   56262 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:21:27.550254   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 14:21:27.557751   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 14:21:27.571295   56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:21:27.584791   56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:21:27.598438   56262 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:21:27.601396   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:27.610834   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:27.705062   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:27.720000   56262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:21:27.761967   56262 out.go:177] * Verifying Kubernetes components...
	I0505 14:21:27.720191   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:27.783193   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:27.916127   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:27.937011   56262 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:27.937198   56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 14:21:27.937233   56262 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
	I0505 14:21:27.937400   56262 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:21:27.937478   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:27.937483   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:27.937491   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:27.937495   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.141758   56262 round_trippers.go:574] Response Status: 200 OK in 9202 milliseconds
	I0505 14:21:37.151494   56262 node_ready.go:49] node "ha-671000-m02" has status "Ready":"True"
	I0505 14:21:37.151510   56262 node_ready.go:38] duration metric: took 9.212150687s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:21:37.151520   56262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:21:37.151577   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:21:37.151583   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.151590   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.151594   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.191750   56262 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0505 14:21:37.198443   56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.198500   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:21:37.198504   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.198511   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.198515   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.209480   56262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0505 14:21:37.210158   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.210166   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.210172   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.210175   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.218742   56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:21:37.219086   56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.219096   56262 pod_ready.go:81] duration metric: took 20.63356ms for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.219105   56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.219148   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
	I0505 14:21:37.219153   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.219162   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.219170   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.221463   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.221880   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.221889   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.221897   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.221905   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.226727   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:37.227035   56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.227045   56262 pod_ready.go:81] duration metric: took 7.931899ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.227052   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.227120   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
	I0505 14:21:37.227125   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.227131   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.227135   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.228755   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.229130   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.229137   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.229143   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.229147   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.230595   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.230887   56262 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.230895   56262 pod_ready.go:81] duration metric: took 3.837029ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.230901   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.230929   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
	I0505 14:21:37.230934   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.230939   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.230943   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.232448   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.232868   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:37.232875   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.232880   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.232887   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.234369   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.234695   56262 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.234704   56262 pod_ready.go:81] duration metric: took 3.797599ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.234710   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.234742   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m03
	I0505 14:21:37.234747   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.234753   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.234760   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.236183   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.351671   56262 request.go:629] Waited for 115.086464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:37.351703   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:37.351742   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.351749   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.351752   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.353285   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.353602   56262 pod_ready.go:92] pod "etcd-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.353612   56262 pod_ready.go:81] duration metric: took 118.878942ms for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.353624   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.551816   56262 request.go:629] Waited for 198.124765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:21:37.551893   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:21:37.551900   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.551906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.551909   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.554076   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.753242   56262 request.go:629] Waited for 198.55091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.753343   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.753355   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.753365   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.753371   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.756033   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.756647   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.756662   56262 pod_ready.go:81] duration metric: took 402.967586ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.756670   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.952604   56262 request.go:629] Waited for 195.869842ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:21:37.952645   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:21:37.952654   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.952662   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.952668   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.954903   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.151783   56262 request.go:629] Waited for 196.293382ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:38.151830   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:38.151837   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.151842   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.151847   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.156373   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:38.156768   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.156778   56262 pod_ready.go:81] duration metric: took 400.046736ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.156785   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.351807   56262 request.go:629] Waited for 194.95401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
	I0505 14:21:38.351854   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
	I0505 14:21:38.351862   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.351904   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.351908   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.354097   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.552842   56262 request.go:629] Waited for 198.080217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:38.552968   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:38.552980   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.552990   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.552997   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.555719   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.556135   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.556146   56262 pod_ready.go:81] duration metric: took 399.298154ms for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.556153   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.752061   56262 request.go:629] Waited for 195.828299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:21:38.752126   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:21:38.752135   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.752148   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.752158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.754957   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.951929   56262 request.go:629] Waited for 196.315529ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:38.951959   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:38.951964   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.951969   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.951973   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.953886   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:38.954275   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.954284   56262 pod_ready.go:81] duration metric: took 398.072724ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.954297   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:39.151925   56262 request.go:629] Waited for 197.547759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.152007   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.152019   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.152025   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.152029   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.157962   56262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:21:39.352575   56262 request.go:629] Waited for 194.147234ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.352619   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.352625   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.352631   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.352635   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.356708   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:39.553301   56262 request.go:629] Waited for 97.737035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.553334   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.553340   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.553346   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.553351   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.555371   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:39.752052   56262 request.go:629] Waited for 196.251955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.752134   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.752145   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.752153   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.752158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.754627   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:39.955025   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.955059   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.955067   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.955072   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.956871   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.152049   56262 request.go:629] Waited for 194.641301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.152132   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.152171   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.152184   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.152191   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.154660   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.456022   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:40.456041   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.456050   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.456056   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.458617   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.552124   56262 request.go:629] Waited for 92.99221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.552206   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.552212   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.552220   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.552225   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.554220   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.956144   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:40.956162   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.956168   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.956172   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.958759   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.959215   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.959223   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.959229   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.959232   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.960907   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.961228   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:41.455646   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:41.455689   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.455698   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.455722   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.457872   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:41.458331   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:41.458339   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.458344   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.458355   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.460082   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:41.955474   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:41.955516   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.955524   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.955528   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.957597   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:41.958178   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:41.958186   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.958190   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.958193   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.960269   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:42.454954   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:42.454969   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.454975   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.454978   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.456939   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:42.457382   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:42.457390   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.457395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.457398   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.459026   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:42.955443   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:42.955465   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.955493   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.955500   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.957908   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:42.958355   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:42.958362   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.958368   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.958371   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.959853   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:43.455723   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:43.455776   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.455798   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.455806   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.458560   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:43.458997   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:43.459004   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.459009   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.459013   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.460509   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:43.460811   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:43.955429   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:43.955470   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.955481   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.955487   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.957836   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:43.958298   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:43.958305   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.958310   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.958320   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.960083   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:44.455061   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:44.455081   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.455088   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.455091   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.458998   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:44.459504   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:44.459511   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.459517   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.459521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.461518   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:44.956537   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:44.956577   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.956598   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.956604   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.959253   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:44.959715   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:44.959723   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.959729   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.959733   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.961411   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:45.455377   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:45.455402   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.455414   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.455420   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.458080   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:45.458718   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:45.458729   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.458736   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.458752   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.463742   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:45.464348   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:45.955580   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:45.955620   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.955630   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.955635   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.957968   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:45.958442   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:45.958449   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.958455   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.958466   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.959999   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:46.457118   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:46.457136   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.457145   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.457149   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.459543   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:46.460023   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:46.460031   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.460036   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.460047   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.461647   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:46.956302   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:46.956318   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.956324   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.956326   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.958416   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:46.958859   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:46.958866   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.958872   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.958874   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.960501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.456753   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:47.456797   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.456806   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.456812   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.458891   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:47.459328   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:47.459336   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.459342   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.459345   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.460911   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.955503   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:47.955545   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.955558   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.955564   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.959575   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:47.960158   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:47.960166   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.960171   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.960175   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.961799   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.962164   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:48.456730   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:48.456747   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.456753   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.456757   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.460539   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:48.461047   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:48.461055   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.461061   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.461064   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.465508   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:48.465989   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:48.465998   56262 pod_ready.go:81] duration metric: took 9.510763792s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.466006   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.466042   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m03
	I0505 14:21:48.466047   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.466052   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.466055   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.472370   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:21:48.473005   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:48.473012   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.473017   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.473020   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.481996   56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:21:48.482501   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:48.482510   56262 pod_ready.go:81] duration metric: took 16.497528ms for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.482517   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.482551   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:48.482556   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.482561   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.482565   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.490468   56262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 14:21:48.491138   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:48.491145   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.491151   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.491155   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.494380   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:48.983087   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.004024   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.004031   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.004035   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.006380   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.007016   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.007024   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.007030   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.007033   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.008914   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:49.483919   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.483931   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.483938   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.483941   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.486104   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.486673   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.486681   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.486687   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.486691   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.488609   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:49.983081   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.983096   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.983104   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.983108   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.985873   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.986420   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.986428   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.986434   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.986437   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.988349   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:50.482957   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:50.482970   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.482976   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.482980   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.485479   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:50.485920   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:50.485927   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.485934   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.485938   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.487720   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:50.488107   56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:50.983210   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:50.983225   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.983232   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.983236   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.986255   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:50.986840   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:50.986849   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.986855   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.986866   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.989948   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:51.483355   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:51.483374   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.483388   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.483395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.486820   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:51.487280   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:51.487287   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.487293   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.487297   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.489325   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:51.983090   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:51.983105   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.983112   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.983115   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.984988   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:51.985393   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:51.985401   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.985405   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.985410   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.986930   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:52.484493   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:52.484507   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.484516   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.484521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.487250   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:52.487686   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:52.487694   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.487698   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.487702   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.489501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:52.489895   56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:52.983025   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:52.983048   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.983059   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.983066   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.986110   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:52.986621   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:52.986629   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.986634   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.986639   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.988098   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:53.484742   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:53.484762   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:53.484773   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:53.484779   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:53.488010   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:53.488477   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:53.488487   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:53.488495   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:53.488501   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:53.490598   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:53.982981   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:54.035555   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.035577   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.035582   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.038056   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:54.038420   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:54.038427   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.038431   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.038436   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.040740   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:54.483231   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:54.483250   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.483259   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.483268   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.486904   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:54.487432   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:54.487440   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.487445   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.487453   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.489085   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.489450   56262 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:54.489459   56262 pod_ready.go:81] duration metric: took 6.006607245s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.489472   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.489506   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:21:54.489511   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.489516   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.489520   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.491341   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.492125   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:21:54.492155   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.492161   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.492166   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.494017   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.494387   56262 pod_ready.go:92] pod "kube-proxy-b45s6" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:54.494395   56262 pod_ready.go:81] duration metric: took 4.917824ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.494401   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.494436   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:54.494441   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.494447   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.494452   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.496166   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.496620   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:54.496627   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.496633   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.496637   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.498306   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.996074   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:54.996123   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.996136   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.996145   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.999201   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:54.999706   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:54.999714   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.999720   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.999724   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.001519   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:55.495423   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:55.495482   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.495494   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.495500   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.498280   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:55.498730   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:55.498738   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.498744   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.498748   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.500462   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:55.995317   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:55.995337   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.995349   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.995356   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.998789   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:55.999222   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:55.999231   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.999238   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.999241   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.001041   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:56.494888   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:56.494946   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.494958   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.494968   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.497790   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:56.498347   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:56.498358   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.498365   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.498371   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.500278   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:56.500656   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:56.994875   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:56.994892   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.994900   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.994906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.998618   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:56.999206   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:56.999214   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.999220   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.999223   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.000855   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:57.495334   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:57.495358   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.495370   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.495375   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.498502   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:57.498951   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:57.498958   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.498963   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.498966   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.500746   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:57.995520   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:57.995543   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.995579   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.995598   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.998407   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:57.998972   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:57.998979   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.998985   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.999001   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.000625   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:58.495031   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:58.495049   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:58.495061   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:58.495067   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.498099   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:58.498667   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:58.498677   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:58.498685   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:58.498691   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.500315   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:58.995219   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.001733   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.001744   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.001750   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.004276   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:59.004776   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.004783   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.004788   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.004792   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.006346   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:59.006731   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:59.495209   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.495224   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.495243   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.495269   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.498470   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:59.498897   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.498905   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.498911   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.498915   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.501440   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:59.995151   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.995179   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.995191   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.995198   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.998453   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:59.999020   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.999031   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.999039   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.999043   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.000983   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:00.495135   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:00.495148   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.495154   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.495158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.498254   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:00.499175   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:00.499184   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.499190   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.499193   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.501895   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:00.995194   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:00.995216   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.995229   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.995237   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.998468   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:00.998920   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:00.998926   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.998932   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.998935   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.000600   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:01.494835   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:01.494860   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.494871   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.494877   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.497889   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:01.498547   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:01.498554   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.498558   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.498561   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.500447   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:01.500751   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:22:01.996453   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:01.996472   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.996483   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.996490   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.999407   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:01.999918   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:01.999925   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.999931   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.999934   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.001706   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:02.495361   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:02.495382   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.495393   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.495400   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.498902   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:02.499504   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:02.499511   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.499517   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.499521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.501049   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:02.995527   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:02.995548   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.995559   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.995565   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.998530   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:02.998981   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:02.998988   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.998994   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.998999   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.000798   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:03.495714   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:03.495730   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:03.495737   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:03.495741   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.498051   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:03.498563   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:03.498571   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:03.498576   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:03.498588   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.500374   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:03.995061   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.002434   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.002442   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.002447   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.004861   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:04.005402   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.005409   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.005415   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.005418   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.011753   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:04.012403   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:22:04.494873   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.494893   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.494902   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.494906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.497460   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:04.497938   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.497946   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.497951   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.497960   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.499356   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:04.995159   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.995178   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.995188   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.995195   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.998687   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:04.999335   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.999342   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.999348   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.999353   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.000905   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.494984   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:05.494997   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.495003   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.495007   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.497333   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.497727   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:05.497735   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.497741   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.497744   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.499501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.500069   56262 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.500079   56262 pod_ready.go:81] duration metric: took 11.005361676s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.500095   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.500132   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zwgd2
	I0505 14:22:05.500137   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.500142   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.500146   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.502320   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.502750   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:22:05.502757   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.502763   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.502767   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.504769   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.505126   56262 pod_ready.go:92] pod "kube-proxy-zwgd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.505135   56262 pod_ready.go:81] duration metric: took 5.036025ms for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.505142   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.505179   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:22:05.505184   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.505189   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.505194   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.507083   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.507461   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:05.507468   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.507473   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.507477   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.509224   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.509709   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.509724   56262 pod_ready.go:81] duration metric: took 4.57068ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.509732   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.509767   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:22:05.509771   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.509777   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.509780   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.511597   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.511989   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:22:05.511996   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.512000   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.512010   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.514080   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.514548   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.514556   56262 pod_ready.go:81] duration metric: took 4.819427ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.514563   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.514599   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m03
	I0505 14:22:05.514603   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.514609   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.514612   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.516436   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.516907   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:22:05.516914   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.516919   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.516923   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.519043   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.519280   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.519288   56262 pod_ready.go:81] duration metric: took 4.719804ms for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.519294   56262 pod_ready.go:38] duration metric: took 28.365933714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:22:05.519320   56262 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:22:05.519375   56262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:22:05.533426   56262 api_server.go:72] duration metric: took 37.809561996s to wait for apiserver process to appear ...
	I0505 14:22:05.533438   56262 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:22:05.533454   56262 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
	I0505 14:22:05.537141   56262 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
	ok
	I0505 14:22:05.537173   56262 round_trippers.go:463] GET https://192.169.0.51:8443/version
	I0505 14:22:05.537183   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.537191   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.537195   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.537884   56262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0505 14:22:05.538028   56262 api_server.go:141] control plane version: v1.30.0
	I0505 14:22:05.538038   56262 api_server.go:131] duration metric: took 4.594882ms to wait for apiserver health ...
	I0505 14:22:05.538049   56262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 14:22:05.696401   56262 request.go:629] Waited for 158.305976ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:05.696517   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:05.696529   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.696539   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.696547   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.703009   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:05.708412   56262 system_pods.go:59] 26 kube-system pods found
	I0505 14:22:05.708432   56262 system_pods.go:61] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:05.708439   56262 system_pods.go:61] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:05.708445   56262 system_pods.go:61] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:22:05.708448   56262 system_pods.go:61] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:22:05.708451   56262 system_pods.go:61] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
	I0505 14:22:05.708458   56262 system_pods.go:61] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
	I0505 14:22:05.708462   56262 system_pods.go:61] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:22:05.708464   56262 system_pods.go:61] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:22:05.708468   56262 system_pods.go:61] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0505 14:22:05.708471   56262 system_pods.go:61] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:22:05.708474   56262 system_pods.go:61] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:22:05.708477   56262 system_pods.go:61] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
	I0505 14:22:05.708482   56262 system_pods.go:61] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:22:05.708487   56262 system_pods.go:61] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:22:05.708489   56262 system_pods.go:61] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
	I0505 14:22:05.708493   56262 system_pods.go:61] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:22:05.708495   56262 system_pods.go:61] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:22:05.708497   56262 system_pods.go:61] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:22:05.708500   56262 system_pods.go:61] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
	I0505 14:22:05.708502   56262 system_pods.go:61] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:22:05.708505   56262 system_pods.go:61] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:22:05.708507   56262 system_pods.go:61] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
	I0505 14:22:05.708510   56262 system_pods.go:61] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:22:05.708512   56262 system_pods.go:61] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:22:05.708515   56262 system_pods.go:61] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
	I0505 14:22:05.708520   56262 system_pods.go:61] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:22:05.708525   56262 system_pods.go:74] duration metric: took 170.469417ms to wait for pod list to return data ...
	I0505 14:22:05.708531   56262 default_sa.go:34] waiting for default service account to be created ...
	I0505 14:22:05.897069   56262 request.go:629] Waited for 188.474109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:22:05.897179   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:22:05.897186   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.897194   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.897199   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.950188   56262 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0505 14:22:05.950392   56262 default_sa.go:45] found service account: "default"
	I0505 14:22:05.950405   56262 default_sa.go:55] duration metric: took 241.864725ms for default service account to be created ...
	I0505 14:22:05.950412   56262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 14:22:06.095263   56262 request.go:629] Waited for 144.804696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:06.095366   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:06.095376   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:06.095388   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:06.095395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:06.102144   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:06.107768   56262 system_pods.go:86] 26 kube-system pods found
	I0505 14:22:06.107783   56262 system_pods.go:89] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:06.107794   56262 system_pods.go:89] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:06.107800   56262 system_pods.go:89] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:22:06.107803   56262 system_pods.go:89] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:22:06.107808   56262 system_pods.go:89] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
	I0505 14:22:06.107811   56262 system_pods.go:89] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
	I0505 14:22:06.107815   56262 system_pods.go:89] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:22:06.107818   56262 system_pods.go:89] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:22:06.107823   56262 system_pods.go:89] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0505 14:22:06.107826   56262 system_pods.go:89] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:22:06.107831   56262 system_pods.go:89] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:22:06.107834   56262 system_pods.go:89] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
	I0505 14:22:06.107838   56262 system_pods.go:89] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:22:06.107842   56262 system_pods.go:89] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:22:06.107847   56262 system_pods.go:89] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
	I0505 14:22:06.107854   56262 system_pods.go:89] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:22:06.107862   56262 system_pods.go:89] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:22:06.107866   56262 system_pods.go:89] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:22:06.107869   56262 system_pods.go:89] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
	I0505 14:22:06.107874   56262 system_pods.go:89] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:22:06.107877   56262 system_pods.go:89] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:22:06.107887   56262 system_pods.go:89] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
	I0505 14:22:06.107890   56262 system_pods.go:89] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:22:06.107894   56262 system_pods.go:89] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:22:06.107897   56262 system_pods.go:89] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
	I0505 14:22:06.107900   56262 system_pods.go:89] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:22:06.107905   56262 system_pods.go:126] duration metric: took 157.48572ms to wait for k8s-apps to be running ...
	I0505 14:22:06.107910   56262 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 14:22:06.107954   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:22:06.119916   56262 system_svc.go:56] duration metric: took 12.002036ms WaitForService to wait for kubelet
	I0505 14:22:06.119930   56262 kubeadm.go:576] duration metric: took 38.396059047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:22:06.119941   56262 node_conditions.go:102] verifying NodePressure condition ...
	I0505 14:22:06.295252   56262 request.go:629] Waited for 175.271788ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
	I0505 14:22:06.295332   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
	I0505 14:22:06.295338   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:06.295345   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:06.295350   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:06.299820   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:22:06.300760   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300774   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300783   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300787   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300791   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300794   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300797   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300801   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300804   56262 node_conditions.go:105] duration metric: took 180.85639ms to run NodePressure ...
	I0505 14:22:06.300811   56262 start.go:240] waiting for startup goroutines ...
	I0505 14:22:06.300829   56262 start.go:254] writing updated cluster config ...
	I0505 14:22:06.322636   56262 out.go:177] 
	I0505 14:22:06.343913   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:22:06.344042   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.366539   56262 out.go:177] * Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
	I0505 14:22:06.408466   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:22:06.408493   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:22:06.408686   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:22:06.408703   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:22:06.408834   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.409908   56262 start.go:360] acquireMachinesLock for ha-671000-m03: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:22:06.409993   56262 start.go:364] duration metric: took 67.566µs to acquireMachinesLock for "ha-671000-m03"
	I0505 14:22:06.410011   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:22:06.410016   56262 fix.go:54] fixHost starting: m03
	I0505 14:22:06.410315   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:22:06.410333   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:22:06.419592   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57925
	I0505 14:22:06.419993   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:22:06.420359   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:22:06.420375   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:22:06.420588   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:22:06.420701   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:06.420780   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetState
	I0505 14:22:06.420862   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.420955   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 55740
	I0505 14:22:06.421873   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
	I0505 14:22:06.421938   56262 fix.go:112] recreateIfNeeded on ha-671000-m03: state=Stopped err=<nil>
	I0505 14:22:06.421958   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	W0505 14:22:06.422054   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:22:06.443498   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m03" ...
	I0505 14:22:06.485588   56262 main.go:141] libmachine: (ha-671000-m03) Calling .Start
	I0505 14:22:06.485823   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.485876   56262 main.go:141] libmachine: (ha-671000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid
	I0505 14:22:06.487603   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
	I0505 14:22:06.487617   56262 main.go:141] libmachine: (ha-671000-m03) DBG | pid 55740 is in state "Stopped"
	I0505 14:22:06.487633   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid...
	I0505 14:22:06.488242   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Using UUID be90591f-7869-4905-ae38-2f481381ca7c
	I0505 14:22:06.514163   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Generated MAC ce:17:a:56:1e:f8
	I0505 14:22:06.514197   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:22:06.514318   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:22:06.514365   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:22:06.514413   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "be90591f-7869-4905-ae38-2f481381ca7c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:22:06.514460   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U be90591f-7869-4905-ae38-2f481381ca7c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:22:06.514470   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:22:06.515957   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Pid is 56300
	I0505 14:22:06.516349   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Attempt 0
	I0505 14:22:06.516370   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.516444   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 56300
	I0505 14:22:06.518246   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Searching for ce:17:a:56:1e:f8 in /var/db/dhcpd_leases ...
	I0505 14:22:06.518360   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:22:06.518376   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
	I0505 14:22:06.518417   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:22:06.518433   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:22:06.518449   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
	I0505 14:22:06.518457   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found match: ce:17:a:56:1e:f8
	I0505 14:22:06.518467   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetConfigRaw
	I0505 14:22:06.518473   56262 main.go:141] libmachine: (ha-671000-m03) DBG | IP: 192.169.0.53
	I0505 14:22:06.519132   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:06.519357   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.519808   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:22:06.519818   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:06.519942   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:06.520079   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:06.520182   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:06.520284   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:06.520381   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:06.520488   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:06.520648   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:06.520655   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:22:06.524407   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:22:06.532556   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:22:06.533607   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:22:06.533622   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:22:06.533633   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:22:06.533644   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:22:06.917916   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:22:06.917942   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:22:07.032632   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:22:07.032653   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:22:07.032677   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:22:07.032689   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:22:07.033533   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:22:07.033546   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:22:12.402771   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:22:12.402786   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:22:12.402806   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:22:12.426606   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:22:41.581350   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:22:41.581367   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.581506   56262 buildroot.go:166] provisioning hostname "ha-671000-m03"
	I0505 14:22:41.581517   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.581600   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.581683   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.581781   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.581875   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.581960   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.582100   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.582238   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.582247   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m03 && echo "ha-671000-m03" | sudo tee /etc/hostname
	I0505 14:22:41.647083   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m03
	
	I0505 14:22:41.647098   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.647232   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.647343   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.647430   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.647521   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.647657   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.647849   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.647862   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:22:41.709306   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:22:41.709326   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:22:41.709344   56262 buildroot.go:174] setting up certificates
	I0505 14:22:41.709357   56262 provision.go:84] configureAuth start
	I0505 14:22:41.709363   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.709499   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:41.709593   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.709680   56262 provision.go:143] copyHostCerts
	I0505 14:22:41.709715   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:22:41.709786   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:22:41.709792   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:22:41.709937   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:22:41.710168   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:22:41.710212   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:22:41.710217   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:22:41.710297   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:22:41.710445   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:22:41.710490   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:22:41.710497   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:22:41.710575   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:22:41.710718   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m03 san=[127.0.0.1 192.169.0.53 ha-671000-m03 localhost minikube]
	I0505 14:22:41.753782   56262 provision.go:177] copyRemoteCerts
	I0505 14:22:41.753842   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:22:41.753857   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.753999   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.754106   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.754195   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.754274   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:41.788993   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:22:41.789066   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:22:41.808008   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:22:41.808084   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:22:41.828147   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:22:41.828228   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:22:41.848543   56262 provision.go:87] duration metric: took 139.178952ms to configureAuth
	I0505 14:22:41.848558   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:22:41.848732   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:22:41.848746   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:41.848890   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.848974   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.849066   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.849145   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.849226   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.849346   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.849468   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.849476   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:22:41.905134   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:22:41.905147   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:22:41.905226   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:22:41.905236   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.905372   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.905459   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.905559   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.905645   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.905773   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.905913   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.905965   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	Environment="NO_PROXY=192.169.0.51,192.169.0.52"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:22:41.971506   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	Environment=NO_PROXY=192.169.0.51,192.169.0.52
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:22:41.971532   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.971667   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.971753   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.971832   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.971919   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.972061   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.972206   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.972218   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:22:43.586757   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:22:43.586772   56262 machine.go:97] duration metric: took 37.066967123s to provisionDockerMachine
	I0505 14:22:43.586795   56262 start.go:293] postStartSetup for "ha-671000-m03" (driver="hyperkit")
	I0505 14:22:43.586804   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:22:43.586816   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.587008   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:22:43.587022   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.587109   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.587250   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.587368   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.587470   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.621728   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:22:43.624913   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:22:43.624927   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:22:43.625027   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:22:43.625208   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:22:43.625215   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:22:43.625422   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:22:43.632883   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:22:43.652930   56262 start.go:296] duration metric: took 66.125789ms for postStartSetup
	I0505 14:22:43.652964   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.653131   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:22:43.653145   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.653240   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.653328   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.653413   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.653505   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.687474   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:22:43.687532   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:22:43.719424   56262 fix.go:56] duration metric: took 37.309414657s for fixHost
	I0505 14:22:43.719447   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.719581   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.719680   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.719767   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.719859   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.719991   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:43.720140   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:43.720147   56262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:22:43.777003   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944163.917671963
	
	I0505 14:22:43.777016   56262 fix.go:216] guest clock: 1714944163.917671963
	I0505 14:22:43.777022   56262 fix.go:229] Guest: 2024-05-05 14:22:43.917671963 -0700 PDT Remote: 2024-05-05 14:22:43.719438 -0700 PDT m=+114.784889102 (delta=198.233963ms)
	I0505 14:22:43.777033   56262 fix.go:200] guest clock delta is within tolerance: 198.233963ms
	I0505 14:22:43.777036   56262 start.go:83] releasing machines lock for "ha-671000-m03", held for 37.367046714s
	I0505 14:22:43.777054   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.777184   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:43.798458   56262 out.go:177] * Found network options:
	I0505 14:22:43.818375   56262 out.go:177]   - NO_PROXY=192.169.0.51,192.169.0.52
	W0505 14:22:43.839196   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:22:43.839212   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:22:43.839223   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839636   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839763   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839847   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:22:43.839883   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	W0505 14:22:43.839885   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:22:43.839898   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:22:43.839953   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:22:43.839970   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.839989   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.840065   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.840123   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.840188   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.840221   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.840303   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.840332   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.840420   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	W0505 14:22:43.919168   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:22:43.919245   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:22:43.936501   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:22:43.936515   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:22:43.936582   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:22:43.953774   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:22:43.963068   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:22:43.972111   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:22:43.972163   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:22:43.981147   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:22:44.011701   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:22:44.020897   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:22:44.030143   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:22:44.039491   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:22:44.048778   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:22:44.057937   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:22:44.067298   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:22:44.075698   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:22:44.083983   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:22:44.200980   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:22:44.219877   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:22:44.219946   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:22:44.236639   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:22:44.254367   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:22:44.271268   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:22:44.282915   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:22:44.293466   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:22:44.317181   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:22:44.327878   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:22:44.343024   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:22:44.346054   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:22:44.353257   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:22:44.367082   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:22:44.465180   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:22:44.569600   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:22:44.569629   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:22:44.584431   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:22:44.680947   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:23:45.736510   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.056089884s)
	I0505 14:23:45.736595   56262 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0505 14:23:45.770790   56262 out.go:177] 
	W0505 14:23:45.791249   56262 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 05 21:22:41 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.352208248Z" level=info msg="Starting up"
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.353022730Z" level=info msg="containerd not running, starting managed containerd"
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.358767057Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.373539189Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388000547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388073973Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388137944Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388171760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388313706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388355785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388477111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388518957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388551610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388580389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388726935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388950191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390520791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390570725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390706880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390751886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390888815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390940476Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390972496Z" level=info msg="metadata content store policy set" policy=shared
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394800432Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394883868Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394961138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395000278Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395036706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395111009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395337703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395418767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395454129Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395484232Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395514263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395546554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395576938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395607440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395641518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395677040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395708605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395737963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395799761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395843188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395874408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395904381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395933636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395965927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395995431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396033716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396067448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396098841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396127871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396155969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396184510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396215668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396250321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396280045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396307939Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396379697Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396424577Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396475305Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396510849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396569471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396621386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396656010Z" level=info msg="NRI interface is disabled by configuration."
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396883316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396972499Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397031244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397069101Z" level=info msg="containerd successfully booted in 0.024677s"
	May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.379929944Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.413119848Z" level=info msg="Loading containers: start."
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.663705690Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.700545709Z" level=info msg="Loading containers: done."
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707501270Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707669278Z" level=info msg="Daemon has completed initialization"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725886686Z" level=info msg="API listen on [::]:2376"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725971765Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 21:22:43 ha-671000-m03 systemd[1]: Started Docker Application Container Engine.
	May 05 21:22:44 ha-671000-m03 systemd[1]: Stopping Docker Application Container Engine...
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.833114404Z" level=info msg="Processing signal 'terminated'"
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834199869Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834666188Z" level=info msg="Daemon shutdown complete"
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834695637Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834707874Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 21:22:45 ha-671000-m03 systemd[1]: docker.service: Deactivated successfully.
	May 05 21:22:45 ha-671000-m03 systemd[1]: Stopped Docker Application Container Engine.
	May 05 21:22:45 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
	May 05 21:22:45 ha-671000-m03 dockerd[1073]: time="2024-05-05T21:22:45.887265470Z" level=info msg="Starting up"
	May 05 21:23:45 ha-671000-m03 dockerd[1073]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 21:23:45 ha-671000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0505 14:23:45.791332   56262 out.go:239] * 
	W0505 14:23:45.791963   56262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:23:45.854203   56262 out.go:177] 
	
	
	==> Docker <==
	May 05 21:22:04 ha-671000 dockerd[1136]: time="2024-05-05T21:22:04.237377141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263750494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263806421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263818283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.263888173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265011165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265198272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265235383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:05 ha-671000 dockerd[1136]: time="2024-05-05T21:22:05.265331468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.280534299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.280666251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.280681083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:09 ha-671000 dockerd[1136]: time="2024-05-05T21:22:09.284884558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.248610291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.248876754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.248900713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.249023707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:34 ha-671000 dockerd[1130]: time="2024-05-05T21:22:34.316945093Z" level=info msg="ignoring event" container=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317591194Z" level=info msg="shim disconnected" id=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 namespace=moby
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317738677Z" level=warning msg="cleaning up after shim disconnected" id=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 namespace=moby
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317783286Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235098682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235605348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235714710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235995155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4e72d733bb177       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   17013aecf8e89       coredns-7db6d8ff4d-hqtd2
	a5ba9a7a24b6f       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   5a876c8ef945c       coredns-7db6d8ff4d-kjf54
	c048dc81e6392       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   382155dbcfe93       kindnet-zvz9x
	76503e51b3afa       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   8637a9efa2c11       busybox-fc5497c4f-lfn9v
	7001a9c78d0af       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   f930d07fb2b00       kube-proxy-kppdj
	0883553982a24       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cca445b0e122c       storage-provisioner
	64c952108db1f       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   2                   66419f8520fde       kube-controller-manager-ha-671000
	0faa6b8c33ebd       c42f13656d0b2                                                                                         2 minutes ago        Running             kube-apiserver            1                   70fab261c2b17       kube-apiserver-ha-671000
	0c29a1524fb04       22aaebb38f4a9                                                                                         2 minutes ago        Running             kube-vip                  0                   2c44ab6fb1b45       kube-vip-ha-671000
	d51ddba3901bd       c7aad43836fa5                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   66419f8520fde       kube-controller-manager-ha-671000
	06468c7f97645       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      1                   7eb485f57bef9       etcd-ha-671000
	09b069cddaf09       259c8277fcbbc                                                                                         2 minutes ago        Running             kube-scheduler            1                   0b3f9b67d960c       kube-scheduler-ha-671000
	d08c19fcd330c       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   0a3a1177976eb       busybox-fc5497c4f-lfn9v
	aa3ff28b7c901       cbb01a7bd410d                                                                                         7 minutes ago        Exited              coredns                   0                   803b42dbd6068       coredns-7db6d8ff4d-kjf54
	bfe23d4afc231       cbb01a7bd410d                                                                                         7 minutes ago        Exited              coredns                   0                   26bf6869329a0       coredns-7db6d8ff4d-hqtd2
	1a1434eaae36d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              8 minutes ago        Exited              kindnet-cni               0                   61be6d7331d2d       kindnet-zvz9x
	2de2ad908033c       a0bf559e280cf                                                                                         8 minutes ago        Exited              kube-proxy                0                   ce98653ecf0b5       kube-proxy-kppdj
	5254e6584697c       3861cfcd7c04c                                                                                         8 minutes ago        Exited              etcd                      0                   6c18606ff8a34       etcd-ha-671000
	52585f49ef66d       c42f13656d0b2                                                                                         8 minutes ago        Exited              kube-apiserver            0                   157e6496c96d6       kube-apiserver-ha-671000
	0f13fc419c3a3       259c8277fcbbc                                                                                         8 minutes ago        Exited              kube-scheduler            0                   20d7fc1ca35c2       kube-scheduler-ha-671000
	
	
	==> coredns [4e72d733bb17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60404 - 16395 "HINFO IN 7673949606304789129.6924752665992071371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01220844s
	
	
	==> coredns [a5ba9a7a24b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54698 - 36003 "HINFO IN 1073736587953336830.7574535335510144074. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015279179s
	
	
	==> coredns [aa3ff28b7c90] <==
	[INFO] 10.244.0.4:55179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00060962s
	[INFO] 10.244.0.4:54761 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000032941s
	[INFO] 10.244.0.4:53596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034902s
	[INFO] 10.244.1.2:52057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008017s
	[INFO] 10.244.1.2:37246 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000039116s
	[INFO] 10.244.1.2:41412 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078072s
	[INFO] 10.244.1.2:35969 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042719s
	[INFO] 10.244.1.2:60012 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000495345s
	[INFO] 10.244.1.2:57444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068087s
	[INFO] 10.244.1.2:56681 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071523s
	[INFO] 10.244.1.2:51095 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038807s
	[INFO] 10.244.2.2:39666 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012061s
	[INFO] 10.244.0.4:36229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075354s
	[INFO] 10.244.0.4:36052 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059981s
	[INFO] 10.244.0.4:45966 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005648s
	[INFO] 10.244.0.4:40793 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010383s
	[INFO] 10.244.1.2:39020 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075539s
	[INFO] 10.244.1.2:57719 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064383s
	[INFO] 10.244.2.2:46470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097542s
	[INFO] 10.244.2.2:54394 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123552s
	[INFO] 10.244.2.2:60319 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000056346s
	[INFO] 10.244.1.2:32801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087202s
	[INFO] 10.244.1.2:39594 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bfe23d4afc23] <==
	[INFO] 10.244.2.2:60822 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010749854s
	[INFO] 10.244.0.4:46715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116633s
	[INFO] 10.244.0.4:36578 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000057682s
	[INFO] 10.244.2.2:49239 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011646073s
	[INFO] 10.244.2.2:60414 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097s
	[INFO] 10.244.2.2:33426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011533001s
	[INFO] 10.244.2.2:51459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091142s
	[INFO] 10.244.0.4:52044 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037728s
	[INFO] 10.244.0.4:58536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000026924s
	[INFO] 10.244.0.4:60528 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030891s
	[INFO] 10.244.0.4:46083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057358s
	[INFO] 10.244.2.2:35752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076258s
	[INFO] 10.244.2.2:52942 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063141s
	[INFO] 10.244.2.2:37055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096791s
	[INFO] 10.244.1.2:52668 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008334s
	[INFO] 10.244.1.2:39089 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160813s
	[INFO] 10.244.2.2:59653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000092778s
	[INFO] 10.244.0.4:35085 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007747s
	[INFO] 10.244.0.4:32964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073391s
	[INFO] 10.244.0.4:44760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077879s
	[INFO] 10.244.0.4:37758 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071268s
	[INFO] 10.244.1.2:55625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061815s
	[INFO] 10.244.1.2:50908 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000064514s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-671000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T14_15_29_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:15:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:23:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.51
	  Hostname:    ha-671000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 3721a595f38c41b8bbd3cdb36f05098b
	  System UUID:                93894e2d-0000-0000-8cc9-aa1a138ddf96
	  Boot ID:                    844f38c6-034c-4659-bd02-e667c7e866d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lfn9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 coredns-7db6d8ff4d-hqtd2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m7s
	  kube-system                 coredns-7db6d8ff4d-kjf54             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m7s
	  kube-system                 etcd-ha-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m22s
	  kube-system                 kindnet-zvz9x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m7s
	  kube-system                 kube-apiserver-ha-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-ha-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-kppdj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-ha-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-vip-ha-671000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  Starting                 8m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m27s (x8 over 8m27s)  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m27s (x7 over 8m27s)  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m27s (x8 over 8m27s)  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m20s                  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           8m8s                   node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  NodeReady                7m58s                  kubelet          Node ha-671000 status is now: NodeReady
	  Normal  RegisteredNode           6m54s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           5m44s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           3m29s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                   node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           108s                   node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	
	
	Name:               ha-671000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_16_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.52
	  Hostname:    ha-671000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd0c52403e6948f895e68f7307e07d3c
	  System UUID:                294b4d68-0000-0000-b3f3-54381951a5e8
	  Boot ID:                    afe03ef7-7b17-481f-b318-67efdc00c911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q27t4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 etcd-ha-671000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m9s
	  kube-system                 kindnet-kn94d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-apiserver-ha-671000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-ha-671000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-proxy-5jwqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-ha-671000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-vip-ha-671000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m7s                   kube-proxy       
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 3m42s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m11s (x8 over 7m11s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m11s (x8 over 7m11s)  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m11s (x7 over 7m11s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m8s                   node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           6m54s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           5m44s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m45s                  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m45s                  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m45s                  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m45s                  kubelet          Node ha-671000-m02 has been rebooted, boot id: 4c58d033-04b8-4c15-8fdc-920ae431b3e3
	  Normal   Starting                 3m45s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m29s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   Starting                 2m20s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           118s                   node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           108s                   node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	
	
	Name:               ha-671000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_17_49_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:17:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:20:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:18:16 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:18:16 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:18:16 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:18:16 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.53
	  Hostname:    ha-671000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 57e667ca3d044ecd8738fa77dd77fa8b
	  System UUID:                be904905-0000-0000-ae38-2f481381ca7c
	  Boot ID:                    8a14d3dc-4069-4d68-a1d0-b7b11fe06e54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kr2jr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 etcd-ha-671000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m59s
	  kube-system                 kindnet-cbt9x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-apiserver-ha-671000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-controller-manager-ha-671000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-zwgd2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-ha-671000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-vip-ha-671000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node ha-671000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node ha-671000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node ha-671000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m59s                node-controller  Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
	  Normal  RegisteredNode           5m58s                node-controller  Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
	  Normal  RegisteredNode           5m44s                node-controller  Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
	  Normal  RegisteredNode           3m29s                node-controller  Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
	  Normal  RegisteredNode           118s                 node-controller  Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
	  Normal  RegisteredNode           108s                 node-controller  Node ha-671000-m03 event: Registered Node ha-671000-m03 in Controller
	  Normal  NodeNotReady             78s                  node-controller  Node ha-671000-m03 status is now: NodeNotReady
	
	
	Name:               ha-671000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_18_38_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:20:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.54
	  Hostname:    ha-671000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4981d8834c947ca92647a836bff839f
	  System UUID:                8d0f44c8-0000-0000-aaa8-77d77d483dce
	  Boot ID:                    16c48acc-c76d-4b03-8b93-c113a1acb125
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ffg2p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m9s
	  kube-system                 kube-proxy-b45s6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeHasSufficientPID     5m9s (x2 over 5m9s)  kubelet          Node ha-671000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m9s                 node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  NodeAllocatableEnforced  5m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m9s (x2 over 5m9s)  kubelet          Node ha-671000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m9s (x2 over 5m9s)  kubelet          Node ha-671000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m8s                 node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  NodeReady                4m32s                kubelet          Node ha-671000-m04 status is now: NodeReady
	  Normal  RegisteredNode           3m29s                node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  RegisteredNode           118s                 node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  RegisteredNode           108s                 node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  NodeNotReady             78s                  node-controller  Node ha-671000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.036177] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007984] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.371215] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.612826] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[May 5 21:21] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.610406] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.095617] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +1.314538] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.655682] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.256796] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
	[  +0.100506] systemd-fstab-generator[1108]: Ignoring "noauto" option for root device
	[  +0.111570] systemd-fstab-generator[1122]: Ignoring "noauto" option for root device
	[  +2.444375] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.102765] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.091262] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.136792] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.441863] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
	[  +6.939646] kauditd_printk_skb: 276 callbacks suppressed
	[ +21.981272] kauditd_printk_skb: 40 callbacks suppressed
	[May 5 21:22] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.342141] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [06468c7f9764] <==
	{"level":"warn","ts":"2024-05-05T21:23:21.591728Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:22.146429Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:22.146476Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:26.148455Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:26.148515Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:26.591902Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:26.591953Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:30.150682Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:30.150746Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:31.592757Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:31.592823Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:34.152847Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:34.152977Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:36.59348Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:36.593489Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:38.154487Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:38.154534Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:41.594251Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:41.59428Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:42.155735Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:42.155924Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.158028Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.158078Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.594975Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.595025Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	
	
	==> etcd [5254e6584697] <==
	{"level":"warn","ts":"2024-05-05T21:20:41.244715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.517168037s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
	2024/05/05 21:20:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-05T21:20:41.244728Z","caller":"traceutil/trace.go:171","msg":"trace[1070592193] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"7.517242865s","start":"2024-05-05T21:20:33.727481Z","end":"2024-05-05T21:20:41.244724Z","steps":["trace[1070592193] 'agreement among raft nodes before linearized reading'  (duration: 7.517229047s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:20:41.244739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:20:33.727472Z","time spent":"7.517264459s","remote":"127.0.0.1:52468","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/05/05 21:20:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:20:41.318319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:20:41.318441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:20:41.318529Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1792221d12ca7193","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:20:41.318575Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318613Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318632Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318702Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318726Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318811Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318844Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318852Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.318878Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.318893Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319101Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319165Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319193Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319239Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.320696Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.51:2380"}
	{"level":"info","ts":"2024-05-05T21:20:41.320808Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.51:2380"}
	{"level":"info","ts":"2024-05-05T21:20:41.320835Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-671000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.51:2380"],"advertise-client-urls":["https://192.169.0.51:2379"]}
	
	
	==> kernel <==
	 21:23:48 up 2 min,  0 users,  load average: 0.38, 0.30, 0.12
	Linux ha-671000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a1434eaae36] <==
	I0505 21:19:55.731657       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:20:05.736429       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:20:05.736525       1 main.go:227] handling current node
	I0505 21:20:05.736552       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:20:05.736689       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:20:05.736923       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:20:05.736977       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:20:05.737155       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:20:05.737283       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:20:15.745695       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:20:15.745995       1 main.go:227] handling current node
	I0505 21:20:15.746046       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:20:15.746126       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:20:15.746307       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:20:15.746355       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:20:15.746485       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:20:15.746532       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:20:25.759299       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:20:25.759513       1 main.go:227] handling current node
	I0505 21:20:25.759563       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:20:25.759608       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:20:25.759700       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:20:25.759814       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:20:25.759945       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:20:25.759992       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c048dc81e639] <==
	I0505 21:23:10.599027       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:20.608994       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:23:20.609285       1 main.go:227] handling current node
	I0505 21:23:20.609469       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:23:20.609541       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:20.609681       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:23:20.609741       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:20.610023       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:23:20.610110       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:30.618901       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:23:30.619021       1 main.go:227] handling current node
	I0505 21:23:30.619044       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:23:30.619070       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:30.619227       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:23:30.619254       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:30.619356       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:23:30.619383       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:40.633008       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:23:40.633100       1 main.go:227] handling current node
	I0505 21:23:40.633177       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:23:40.633333       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:40.633697       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:23:40.633810       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:40.634043       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:23:40.634273       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0faa6b8c33eb] <==
	I0505 21:21:37.291123       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:21:37.291359       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:21:37.274777       1 aggregator.go:163] waiting for initial CRD sync...
	I0505 21:21:37.375644       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:21:37.375925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:21:37.375971       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:21:37.377200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:21:37.378817       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:21:37.381581       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:21:37.377409       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:21:37.381892       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0505 21:21:37.382046       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:21:37.382198       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:21:37.382286       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:21:37.382435       1 cache.go:39] Caches are synced for autoregister controller
	W0505 21:21:37.393655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
	I0505 21:21:37.416822       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:21:37.416834       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:21:37.417065       1 policy_source.go:224] refreshing policies
	I0505 21:21:37.456433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:21:37.495739       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:21:37.501072       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0505 21:21:37.503150       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0505 21:21:38.282464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:21:38.614946       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.51 192.169.0.52 192.169.0.53]
	
	
	==> kube-apiserver [52585f49ef66] <==
	W0505 21:20:41.280549       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280601       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280629       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280682       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280709       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280761       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280789       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280843       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280871       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280923       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280951       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.281002       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.281029       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.281054       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0505 21:20:41.281265       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 1.492664ms, panicked: false, err: rpc error: code = Unknown desc = malformed header: missing HTTP content-type, panic-reason: <nil>
	W0505 21:20:41.284566       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.284618       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.284660       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.284759       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285529       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285564       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285594       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285900       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0505 21:20:41.286124       1 timeout.go:142] post-timeout activity - time-elapsed: 149.222533ms, GET "/readyz" result: <nil>
	I0505 21:20:41.286844       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [64c952108db1] <==
	I0505 21:21:59.982133       1 shared_informer.go:320] Caches are synced for disruption
	I0505 21:22:00.000358       1 shared_informer.go:320] Caches are synced for deployment
	I0505 21:22:00.007804       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0505 21:22:00.024496       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:22:00.035366       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0505 21:22:00.035542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.697µs"
	I0505 21:22:00.035653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="137.077µs"
	I0505 21:22:00.070482       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:22:00.445610       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:22:00.453488       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:22:00.453531       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:22:05.511091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.295484ms"
	I0505 21:22:05.511370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.644µs"
	I0505 21:22:21.210161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47µs"
	I0505 21:22:22.203561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.395µs"
	I0505 21:22:29.671409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.983559ms"
	I0505 21:22:29.671803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="344.603µs"
	I0505 21:22:34.895317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.354µs"
	I0505 21:22:34.945918       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qfwk6\": the object has been modified; please apply your changes to the latest version and try again"
	I0505 21:22:34.946345       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bea99034-e1b7-4a88-8a06-fbc74abeaaf9", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qfwk6": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:22:34.949671       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.865154ms"
	I0505 21:22:34.950019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.905µs"
	I0505 21:22:36.927342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="78.051µs"
	I0505 21:22:36.944792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.116942ms"
	I0505 21:22:36.945091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.255µs"
	
	
	==> kube-controller-manager [d51ddba3901b] <==
	I0505 21:21:17.233998       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:21:17.699254       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:21:17.699295       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:21:17.702300       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0505 21:21:17.704596       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:21:17.704681       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:21:17.704829       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0505 21:21:37.707829       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [2de2ad908033] <==
	I0505 21:15:42.197467       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:15:42.206342       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
	I0505 21:15:42.233495       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:15:42.233528       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:15:42.233540       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:15:42.235848       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:15:42.236234       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:15:42.236321       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:15:42.237244       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:15:42.237489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:15:42.237528       1 config.go:192] "Starting service config controller"
	I0505 21:15:42.237533       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:15:42.237620       1 config.go:319] "Starting node config controller"
	I0505 21:15:42.237748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:15:42.338371       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:15:42.338453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:15:42.338567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7001a9c78d0a] <==
	I0505 21:22:05.427749       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:22:05.441644       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
	I0505 21:22:05.545461       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:22:05.545682       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:22:05.545778       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:22:05.548756       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:22:05.549189       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:22:05.549278       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:22:05.551545       1 config.go:192] "Starting service config controller"
	I0505 21:22:05.551674       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:22:05.551761       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:22:05.551848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:22:05.552969       1 config.go:319] "Starting node config controller"
	I0505 21:22:05.553109       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:22:05.652764       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:22:05.652801       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:22:05.653231       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09b069cddaf0] <==
	I0505 21:21:17.140666       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:21:27.959721       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.51:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0505 21:21:27.959770       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:21:27.959776       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:21:37.325220       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:21:37.325291       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:21:37.336314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:21:37.337352       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:21:37.337505       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:21:37.341283       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:21:37.438307       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [0f13fc419c3a] <==
	I0505 21:18:38.425370       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ffg2p" node="ha-671000-m04"
	E0505 21:18:38.428127       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tgdtz\": pod kube-proxy-tgdtz is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tgdtz" node="ha-671000-m04"
	E0505 21:18:38.428397       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f5f9b9e4-4771-49af-a1e4-37910d8267a4(kube-system/kube-proxy-tgdtz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tgdtz"
	E0505 21:18:38.428585       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tgdtz\": pod kube-proxy-tgdtz is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-tgdtz"
	I0505 21:18:38.428695       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tgdtz" node="ha-671000-m04"
	E0505 21:18:38.442949       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-66l5l\": pod kindnet-66l5l is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-66l5l" node="ha-671000-m04"
	E0505 21:18:38.443283       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4f688ff7-efff-4775-9a88-d954e81852f5(kube-system/kindnet-66l5l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-66l5l"
	E0505 21:18:38.443527       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-66l5l\": pod kindnet-66l5l is already assigned to node \"ha-671000-m04\"" pod="kube-system/kindnet-66l5l"
	I0505 21:18:38.443685       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-66l5l" node="ha-671000-m04"
	E0505 21:18:38.443578       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xvf68\": pod kube-proxy-xvf68 is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xvf68" node="ha-671000-m04"
	E0505 21:18:38.444183       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 24a52ab7-73e5-4d91-810b-a2260dae577f(kube-system/kube-proxy-xvf68) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xvf68"
	E0505 21:18:38.444289       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xvf68\": pod kube-proxy-xvf68 is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-xvf68"
	I0505 21:18:38.444408       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xvf68" node="ha-671000-m04"
	E0505 21:18:38.489548       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sbspd\": pod kindnet-sbspd is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sbspd" node="ha-671000-m04"
	E0505 21:18:38.489803       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod afb510c4-ddf4-4844-bdf5-80343510ecb8(kube-system/kindnet-sbspd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sbspd"
	E0505 21:18:38.490102       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sbspd\": pod kindnet-sbspd is already assigned to node \"ha-671000-m04\"" pod="kube-system/kindnet-sbspd"
	I0505 21:18:38.490296       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sbspd" node="ha-671000-m04"
	E0505 21:18:38.499960       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rldf7\": pod kube-proxy-rldf7 is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rldf7" node="ha-671000-m04"
	E0505 21:18:38.500590       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f733f40c-9915-44e5-8f24-9f4101633739(kube-system/kube-proxy-rldf7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rldf7"
	E0505 21:18:38.501561       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rldf7\": pod kube-proxy-rldf7 is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-rldf7"
	I0505 21:18:38.501767       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rldf7" node="ha-671000-m04"
	E0505 21:18:40.483901       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fntvj\": pod kube-proxy-fntvj is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fntvj" node="ha-671000-m04"
	E0505 21:18:40.483990       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fntvj\": pod kube-proxy-fntvj is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-fntvj"
	I0505 21:18:40.484875       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fntvj" node="ha-671000-m04"
	E0505 21:20:41.266642       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 05 21:22:09 ha-671000 kubelet[1488]: I0505 21:22:09.221758    1488 scope.go:117] "RemoveContainer" containerID="f51438bee6679e498856deddc1a03d6233f30f95098fa5a3ec5c95988f53adbd"
	May 05 21:22:21 ha-671000 kubelet[1488]: I0505 21:22:21.192016    1488 scope.go:117] "RemoveContainer" containerID="aa3ff28b7c9017843d8d888a429ee706bd6460febccb79e8787320e99efbdfa4"
	May 05 21:22:21 ha-671000 kubelet[1488]: E0505 21:22:21.192254    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-kjf54_kube-system(c780145e-9d82-4451-94e8-dee09a63eadb)\"" pod="kube-system/coredns-7db6d8ff4d-kjf54" podUID="c780145e-9d82-4451-94e8-dee09a63eadb"
	May 05 21:22:22 ha-671000 kubelet[1488]: I0505 21:22:22.192271    1488 scope.go:117] "RemoveContainer" containerID="bfe23d4afc2313a26ae10b34970e899d74fe1e0f1c01bf9df2058c578bac6bf1"
	May 05 21:22:22 ha-671000 kubelet[1488]: E0505 21:22:22.192522    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-hqtd2_kube-system(e76b43f2-8189-4e5d-adc3-ced554e9ee07)\"" pod="kube-system/coredns-7db6d8ff4d-hqtd2" podUID="e76b43f2-8189-4e5d-adc3-ced554e9ee07"
	May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.191629    1488 scope.go:117] "RemoveContainer" containerID="aa3ff28b7c9017843d8d888a429ee706bd6460febccb79e8787320e99efbdfa4"
	May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.865379    1488 scope.go:117] "RemoveContainer" containerID="797ed8f77f01f6ba02573542d48c7a31705a8fe5b3efed78400f7de2a56d9358"
	May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.865674    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:22:34 ha-671000 kubelet[1488]: E0505 21:22:34.865777    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:22:36 ha-671000 kubelet[1488]: I0505 21:22:36.192222    1488 scope.go:117] "RemoveContainer" containerID="bfe23d4afc2313a26ae10b34970e899d74fe1e0f1c01bf9df2058c578bac6bf1"
	May 05 21:22:49 ha-671000 kubelet[1488]: I0505 21:22:49.192583    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:22:49 ha-671000 kubelet[1488]: E0505 21:22:49.193087    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:02 ha-671000 kubelet[1488]: I0505 21:23:02.191713    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:02 ha-671000 kubelet[1488]: E0505 21:23:02.192199    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:09 ha-671000 kubelet[1488]: E0505 21:23:09.208918    1488 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:23:09 ha-671000 kubelet[1488]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:23:09 ha-671000 kubelet[1488]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:23:09 ha-671000 kubelet[1488]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:23:09 ha-671000 kubelet[1488]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:23:14 ha-671000 kubelet[1488]: I0505 21:23:14.191788    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:14 ha-671000 kubelet[1488]: E0505 21:23:14.192304    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:29 ha-671000 kubelet[1488]: I0505 21:23:29.193869    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:29 ha-671000 kubelet[1488]: E0505 21:23:29.194441    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:40 ha-671000 kubelet[1488]: I0505 21:23:40.191896    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:40 ha-671000 kubelet[1488]: E0505 21:23:40.192265    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-671000 -n ha-671000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-671000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 node delete m03 -v=7 --alsologtostderr
E0505 14:23:51.534539   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:23:54.550020   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 node delete m03 -v=7 --alsologtostderr: (9.166294613s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr: exit status 7 (306.719867ms)

                                                
                                                
-- stdout --
	ha-671000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:23:58.959423   56356 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:23:58.959748   56356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:23:58.959753   56356 out.go:304] Setting ErrFile to fd 2...
	I0505 14:23:58.959757   56356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:23:58.959955   56356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:23:58.960168   56356 out.go:298] Setting JSON to false
	I0505 14:23:58.960193   56356 mustload.go:65] Loading cluster: ha-671000
	I0505 14:23:58.960228   56356 notify.go:220] Checking for updates...
	I0505 14:23:58.960564   56356 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:23:58.960584   56356 status.go:255] checking status of ha-671000 ...
	I0505 14:23:58.961137   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:58.961199   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:58.970516   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58008
	I0505 14:23:58.970854   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:58.971325   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:58.971343   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:58.971556   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:58.971668   56356 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:23:58.971757   56356 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:23:58.971844   56356 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:23:58.972854   56356 status.go:330] ha-671000 host status = "Running" (err=<nil>)
	I0505 14:23:58.972871   56356 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:23:58.973120   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:58.973143   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:58.982166   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58010
	I0505 14:23:58.982526   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:58.982849   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:58.982859   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:58.983094   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:58.983200   56356 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:23:58.983290   56356 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:23:58.983559   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:58.983588   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:58.992410   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58012
	I0505 14:23:58.992724   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:58.993112   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:58.993151   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:58.993360   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:58.993468   56356 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:23:58.993614   56356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:23:58.993632   56356 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:23:58.993707   56356 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:23:58.993792   56356 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:23:58.993878   56356 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:23:58.993963   56356 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:23:59.027427   56356 ssh_runner.go:195] Run: systemctl --version
	I0505 14:23:59.037027   56356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:23:59.051063   56356 kubeconfig.go:125] found "ha-671000" server: "https://192.169.0.254:8443"
	I0505 14:23:59.051088   56356 api_server.go:166] Checking apiserver status ...
	I0505 14:23:59.051125   56356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:23:59.062694   56356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W0505 14:23:59.070197   56356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:23:59.070244   56356 ssh_runner.go:195] Run: ls
	I0505 14:23:59.073613   56356 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0505 14:23:59.076698   56356 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0505 14:23:59.076710   56356 status.go:422] ha-671000 apiserver status = Running (err=<nil>)
	I0505 14:23:59.076720   56356 status.go:257] ha-671000 status: &{Name:ha-671000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:23:59.076731   56356 status.go:255] checking status of ha-671000-m02 ...
	I0505 14:23:59.076991   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:59.077012   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:59.085964   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58016
	I0505 14:23:59.086328   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:59.086673   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:59.086686   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:59.086932   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:59.087045   56356 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:23:59.087122   56356 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:23:59.087231   56356 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
	I0505 14:23:59.088226   56356 status.go:330] ha-671000-m02 host status = "Running" (err=<nil>)
	I0505 14:23:59.088235   56356 host.go:66] Checking if "ha-671000-m02" exists ...
	I0505 14:23:59.088492   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:59.088515   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:59.097352   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58018
	I0505 14:23:59.097734   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:59.098099   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:59.098113   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:59.098311   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:59.098429   56356 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:23:59.098523   56356 host.go:66] Checking if "ha-671000-m02" exists ...
	I0505 14:23:59.098779   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:59.098809   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:59.107434   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58020
	I0505 14:23:59.107763   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:59.108107   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:59.108125   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:59.108338   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:59.108452   56356 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:23:59.108584   56356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:23:59.108602   56356 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:23:59.108685   56356 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:23:59.108763   56356 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:23:59.108842   56356 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:23:59.108923   56356 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:23:59.147391   56356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:23:59.159223   56356 kubeconfig.go:125] found "ha-671000" server: "https://192.169.0.254:8443"
	I0505 14:23:59.159237   56356 api_server.go:166] Checking apiserver status ...
	I0505 14:23:59.159272   56356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:23:59.170595   56356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2073/cgroup
	W0505 14:23:59.178918   56356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2073/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:23:59.178960   56356 ssh_runner.go:195] Run: ls
	I0505 14:23:59.182481   56356 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0505 14:23:59.185866   56356 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0505 14:23:59.185879   56356 status.go:422] ha-671000-m02 apiserver status = Running (err=<nil>)
	I0505 14:23:59.185887   56356 status.go:257] ha-671000-m02 status: &{Name:ha-671000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:23:59.185898   56356 status.go:255] checking status of ha-671000-m04 ...
	I0505 14:23:59.186168   56356 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:23:59.186190   56356 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:23:59.194768   56356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58024
	I0505 14:23:59.195113   56356 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:23:59.195430   56356 main.go:141] libmachine: Using API Version  1
	I0505 14:23:59.195440   56356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:23:59.195657   56356 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:23:59.195763   56356 main.go:141] libmachine: (ha-671000-m04) Calling .GetState
	I0505 14:23:59.195843   56356 main.go:141] libmachine: (ha-671000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:23:59.195954   56356 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid from json: 55847
	I0505 14:23:59.196879   56356 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid 55847 missing from process table
	I0505 14:23:59.196912   56356 status.go:330] ha-671000-m04 host status = "Stopped" (err=<nil>)
	I0505 14:23:59.196918   56356 status.go:343] host is not running, skipping remaining checks
	I0505 14:23:59.196925   56356 status.go:257] ha-671000-m04 status: &{Name:ha-671000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-671000 -n ha-671000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 logs -n 25: (3.158673105s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m02 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04:/home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m04 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp testdata/cp-test.txt                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000:/home/docker/cp-test_ha-671000-m04_ha-671000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000 sudo cat                                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m02:/home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m02 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03:/home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m03 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-671000 node stop m02 -v=7                                                                                                 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-671000 node start m02 -v=7                                                                                                | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:20 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-671000 -v=7                                                                                                       | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-671000 -v=7                                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT | 05 May 24 14:20 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-671000 --wait=true -v=7                                                                                                | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-671000                                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:23 PDT |                     |
	| node    | ha-671000 node delete m03 -v=7                                                                                               | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:23 PDT | 05 May 24 14:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 14:20:48
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 14:20:48.965096   56262 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:20:48.965304   56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:20:48.965309   56262 out.go:304] Setting ErrFile to fd 2...
	I0505 14:20:48.965313   56262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:20:48.965501   56262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:20:48.966984   56262 out.go:298] Setting JSON to false
	I0505 14:20:48.991851   56262 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19219,"bootTime":1714924829,"procs":425,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 14:20:48.991949   56262 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:20:49.013239   56262 out.go:177] * [ha-671000] minikube v1.33.0 on Darwin 14.4.1
	I0505 14:20:49.055173   56262 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:20:49.055223   56262 notify.go:220] Checking for updates...
	I0505 14:20:49.077109   56262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:20:49.097964   56262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 14:20:49.119233   56262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:20:49.139935   56262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:20:49.161146   56262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:20:49.182881   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:20:49.183046   56262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:20:49.183689   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.183764   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:20:49.193369   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57871
	I0505 14:20:49.193700   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:20:49.194120   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:20:49.194134   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:20:49.194326   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:20:49.194462   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.223183   56262 out.go:177] * Using the hyperkit driver based on existing profile
	I0505 14:20:49.265211   56262 start.go:297] selected driver: hyperkit
	I0505 14:20:49.265249   56262 start.go:901] validating driver "hyperkit" against &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:20:49.265473   56262 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:20:49.265691   56262 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:20:49.265889   56262 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 14:20:49.275605   56262 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 14:20:49.280711   56262 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.280731   56262 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 14:20:49.284127   56262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:20:49.284202   56262 cni.go:84] Creating CNI manager for ""
	I0505 14:20:49.284211   56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:20:49.284292   56262 start.go:340] cluster config:
	{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false he
lm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:20:49.284394   56262 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:20:49.326088   56262 out.go:177] * Starting "ha-671000" primary control-plane node in "ha-671000" cluster
	I0505 14:20:49.347002   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:20:49.347074   56262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 14:20:49.347098   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:20:49.347288   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:20:49.347306   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:20:49.347472   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:20:49.348516   56262 start.go:360] acquireMachinesLock for ha-671000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:20:49.348656   56262 start.go:364] duration metric: took 99.405µs to acquireMachinesLock for "ha-671000"
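The two lines above record the machines lock being taken before fixHost runs, with the parameters minikube logged for it (Delay:500ms Timeout:13m0s). As a rough illustration only, the following is a generic retry-until-timeout acquire loop in Go; the file-based mechanism, function name, and lock path are assumptions made for the sketch and are not minikube's actual lock implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquireWithRetry takes a crude file-based lock, retrying every `delay`
// until `timeout` elapses -- the same Delay/Timeout shape the log shows
// for acquireMachinesLock (Delay:500ms Timeout:13m0s).
func acquireWithRetry(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail if the lock file already exists.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock acquired; caller removes the file to release it
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := filepath.Join(os.TempDir(), "ha-671000.lock") // hypothetical path
	start := time.Now()
	if err := acquireWithRetry(lock, 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer os.Remove(lock)
	fmt.Printf("acquired %s in %s\n", lock, time.Since(start))
}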
	I0505 14:20:49.348707   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:20:49.348726   56262 fix.go:54] fixHost starting: 
	I0505 14:20:49.349125   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:20:49.349160   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:20:49.358523   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57873
	I0505 14:20:49.358884   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:20:49.359279   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:20:49.359298   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:20:49.359523   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:20:49.359669   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.359788   56262 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:20:49.359894   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.359963   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 55694
	I0505 14:20:49.360866   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
	I0505 14:20:49.360926   56262 fix.go:112] recreateIfNeeded on ha-671000: state=Stopped err=<nil>
	I0505 14:20:49.360950   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	W0505 14:20:49.361041   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:20:49.402877   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000" ...
	I0505 14:20:49.423939   56262 main.go:141] libmachine: (ha-671000) Calling .Start
	I0505 14:20:49.424311   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.424354   56262 main.go:141] libmachine: (ha-671000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid
	I0505 14:20:49.426302   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 55694 missing from process table
	I0505 14:20:49.426313   56262 main.go:141] libmachine: (ha-671000) DBG | pid 55694 is in state "Stopped"
	I0505 14:20:49.426344   56262 main.go:141] libmachine: (ha-671000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid...
	I0505 14:20:49.426771   56262 main.go:141] libmachine: (ha-671000) DBG | Using UUID 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96
	I0505 14:20:49.551381   56262 main.go:141] libmachine: (ha-671000) DBG | Generated MAC 72:52:a3:7d:5c:d1
	I0505 14:20:49.551411   56262 main.go:141] libmachine: (ha-671000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:20:49.551646   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:20:49.551692   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037f290)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:20:49.551780   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyp
rintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:20:49.551846   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nom
odeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:20:49.551864   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:20:49.553184   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 DEBUG: hyperkit: Pid is 56275
	I0505 14:20:49.553639   56262 main.go:141] libmachine: (ha-671000) DBG | Attempt 0
	I0505 14:20:49.553663   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:20:49.553735   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:20:49.555494   56262 main.go:141] libmachine: (ha-671000) DBG | Searching for 72:52:a3:7d:5c:d1 in /var/db/dhcpd_leases ...
	I0505 14:20:49.555595   56262 main.go:141] libmachine: (ha-671000) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:20:49.555611   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:20:49.555629   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
	I0505 14:20:49.555648   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
	I0505 14:20:49.555661   56262 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394853}
	I0505 14:20:49.555667   56262 main.go:141] libmachine: (ha-671000) DBG | Found match: 72:52:a3:7d:5c:d1
	I0505 14:20:49.555674   56262 main.go:141] libmachine: (ha-671000) DBG | IP: 192.169.0.51
	I0505 14:20:49.555696   56262 main.go:141] libmachine: (ha-671000) Calling .GetConfigRaw
	I0505 14:20:49.556342   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:20:49.556516   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:20:49.556975   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:20:49.556985   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:20:49.557119   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:20:49.557222   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:20:49.557336   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:20:49.557465   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:20:49.557602   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:20:49.557742   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:20:49.557972   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:20:49.557981   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:20:49.561305   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:20:49.617858   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:20:49.618520   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:20:49.618541   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:20:49.618548   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:20:49.618556   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:49 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:20:50.003923   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:20:50.003954   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:20:50.118574   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:20:50.118591   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:20:50.118604   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:20:50.118620   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:20:50.119491   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:20:50.119502   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:50 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:20:55.386088   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:20:55.386105   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:20:55.386124   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:20:55.410129   56262 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:20:55 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:20:59.165992   56262 main.go:141] libmachine: Error dialing TCP: dial tcp 192.169.0.51:22: connect: connection refused
	I0505 14:21:02.226047   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:21:02.226063   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.226198   56262 buildroot.go:166] provisioning hostname "ha-671000"
	I0505 14:21:02.226208   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.226303   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.226392   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.226492   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.226582   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.226673   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.226801   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.226937   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.226945   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000 && echo "ha-671000" | sudo tee /etc/hostname
	I0505 14:21:02.297369   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000
	
	I0505 14:21:02.297395   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.297543   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.297643   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.297751   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.297848   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.297983   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.298121   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.298132   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:21:02.363709   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:21:02.363736   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:21:02.363757   56262 buildroot.go:174] setting up certificates
	I0505 14:21:02.363764   56262 provision.go:84] configureAuth start
	I0505 14:21:02.363771   56262 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:21:02.363911   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:02.364012   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.364108   56262 provision.go:143] copyHostCerts
	I0505 14:21:02.364139   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:02.364208   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:21:02.364216   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:02.364363   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:21:02.364576   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:02.364616   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:21:02.364621   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:02.364702   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:21:02.364858   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:02.364899   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:21:02.364904   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:02.364979   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:21:02.365133   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000 san=[127.0.0.1 192.169.0.51 ha-671000 localhost minikube]
	I0505 14:21:02.566783   56262 provision.go:177] copyRemoteCerts
	I0505 14:21:02.566851   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:21:02.566867   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.567002   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.567081   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.567166   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.567249   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:02.603993   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:21:02.604064   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:21:02.623864   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:21:02.623931   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0505 14:21:02.642984   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:21:02.643054   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 14:21:02.662651   56262 provision.go:87] duration metric: took 298.874135ms to configureAuth
	I0505 14:21:02.662663   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:21:02.662832   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:02.662845   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:02.662976   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.663065   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.663164   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.663269   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.663357   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.663467   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.663594   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.663602   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:21:02.721847   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:21:02.721864   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:21:02.721944   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:21:02.721957   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.722094   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.722182   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.722290   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.722379   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.722504   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.722641   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.722685   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:21:02.791477   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:21:02.791499   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:02.791628   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:02.791713   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.791806   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:02.791895   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:02.792000   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:02.792138   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:02.792148   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:21:04.463791   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
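The command and its output above show the install pattern used here: a fresh docker.service.new is written, diffed against the live unit, and only moved into place (followed by daemon-reload, enable, and restart) when the two differ. On this restart the live unit did not yet exist, so the diff failed and the unit was installed, creating the symlink. Below is a minimal Go sketch of that install-only-if-changed idea; the function name, error handling, and direct systemctl calls are assumptions for illustration, not minikube's code, and the program needs root plus systemd to actually succeed.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src and restarts the unit only when the
// contents differ, mirroring the `diff || { mv; systemctl ... }` shell pattern.
func installIfChanged(src, dst, unit string) error {
	newBytes, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	oldBytes, err := os.ReadFile(dst) // a missing file counts as "changed"
	if err == nil && bytes.Equal(oldBytes, newBytes) {
		return os.Remove(src) // nothing to do, drop the staged copy
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}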
	
	I0505 14:21:04.463805   56262 machine.go:97] duration metric: took 14.90688888s to provisionDockerMachine
	I0505 14:21:04.463814   56262 start.go:293] postStartSetup for "ha-671000" (driver="hyperkit")
	I0505 14:21:04.463821   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:21:04.463832   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.464011   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:21:04.464034   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.464144   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.464235   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.464343   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.464431   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.510297   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:21:04.514333   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:21:04.514346   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:21:04.514446   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:21:04.514637   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:21:04.514644   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:21:04.514851   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:21:04.528097   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:04.557607   56262 start.go:296] duration metric: took 93.785206ms for postStartSetup
	I0505 14:21:04.557630   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.557802   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:21:04.557815   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.557914   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.558026   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.558104   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.558180   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.595384   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:21:04.595439   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:21:04.627954   56262 fix.go:56] duration metric: took 15.279298664s for fixHost
	I0505 14:21:04.627978   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.628106   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.628210   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.628316   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.628400   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.628519   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:04.628664   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:21:04.628671   56262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:21:04.687788   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944064.851392424
	
	I0505 14:21:04.687801   56262 fix.go:216] guest clock: 1714944064.851392424
	I0505 14:21:04.687806   56262 fix.go:229] Guest: 2024-05-05 14:21:04.851392424 -0700 PDT Remote: 2024-05-05 14:21:04.627967 -0700 PDT m=+15.708271847 (delta=223.425424ms)
	I0505 14:21:04.687822   56262 fix.go:200] guest clock delta is within tolerance: 223.425424ms
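The guest clock check above runs what is effectively `date +%s.%N` inside the VM and compares the result against the host clock; provisioning continues because the ~223ms delta is within tolerance. A minimal Go sketch of such a comparison follows; the two-second tolerance and the helper names are assumptions for illustration, not minikube's actual values.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the `date +%s.%N` output from the guest
// (e.g. "1714944064.851392424") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad the fractional part to nanosecond precision before parsing
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714944064.851392424")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}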
	I0505 14:21:04.687828   56262 start.go:83] releasing machines lock for "ha-671000", held for 15.339229169s
	I0505 14:21:04.687844   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.687975   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:04.688073   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688362   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688461   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:04.688537   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:21:04.688563   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.688585   56262 ssh_runner.go:195] Run: cat /version.json
	I0505 14:21:04.688594   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:04.688666   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.688681   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:04.688776   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.688794   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:04.688857   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.688870   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:04.688932   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.688951   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:04.773179   56262 ssh_runner.go:195] Run: systemctl --version
	I0505 14:21:04.778074   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:21:04.782225   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:21:04.782267   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:21:04.795505   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:21:04.795515   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:04.795626   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:04.813193   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:21:04.822043   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:21:04.830859   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:21:04.830912   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:21:04.839650   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:04.848348   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:21:04.857332   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:04.866100   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:21:04.874955   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:21:04.883995   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:21:04.892686   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:21:04.901641   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:21:04.909531   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:21:04.917434   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:05.025345   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:21:05.045401   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:05.045483   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:21:05.056970   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:05.067558   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:21:05.082472   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:05.093595   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:05.104660   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:21:05.123434   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:05.136644   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:05.151834   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:21:05.154642   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:21:05.162375   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:21:05.175761   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:21:05.270844   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:21:05.375810   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:21:05.375883   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:21:05.390245   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:05.495960   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:21:07.797662   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.301692609s)
	I0505 14:21:07.797733   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:21:07.809357   56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:21:07.822066   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:07.832350   56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:21:07.930252   56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:21:08.029360   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.124190   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:21:08.137986   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:08.149027   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.258895   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:21:08.326102   56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:21:08.326177   56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:21:08.330736   56262 start.go:562] Will wait 60s for crictl version
	I0505 14:21:08.330787   56262 ssh_runner.go:195] Run: which crictl
	I0505 14:21:08.333926   56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:21:08.360867   56262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:21:08.360957   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:08.380536   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:08.444390   56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:21:08.444441   56262 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:21:08.444833   56262 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:21:08.449245   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:08.459088   56262 kubeadm.go:877] updating cluster {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fal
se freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 14:21:08.459178   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:21:08.459237   56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:21:08.472336   56262 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:21:08.472348   56262 docker.go:615] Images already preloaded, skipping extraction
	I0505 14:21:08.472419   56262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:21:08.484264   56262 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:21:08.484284   56262 cache_images.go:84] Images are preloaded, skipping loading
	I0505 14:21:08.484299   56262 kubeadm.go:928] updating node { 192.169.0.51 8443 v1.30.0 docker true true} ...
	I0505 14:21:08.484375   56262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:21:08.484439   56262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:21:08.500967   56262 cni.go:84] Creating CNI manager for ""
	I0505 14:21:08.500979   56262 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0505 14:21:08.500990   56262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:21:08.501005   56262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671000 NodeName:ha-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:21:08.501088   56262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-671000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 14:21:08.501113   56262 kube-vip.go:111] generating kube-vip config ...
	I0505 14:21:08.501162   56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:21:08.513119   56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:21:08.513193   56262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:21:08.513250   56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:21:08.521487   56262 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:21:08.521531   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 14:21:08.528952   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0505 14:21:08.542487   56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:21:08.556157   56262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0505 14:21:08.570110   56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:21:08.584111   56262 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:21:08.586992   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:08.596597   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:08.710024   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:08.724251   56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.51
	I0505 14:21:08.724262   56262 certs.go:194] generating shared ca certs ...
	I0505 14:21:08.724272   56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.724457   56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:21:08.724528   56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:21:08.724539   56262 certs.go:256] generating profile certs ...
	I0505 14:21:08.724648   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:21:08.724671   56262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190
	I0505 14:21:08.724686   56262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.53 192.169.0.254]
	I0505 14:21:08.826095   56262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 ...
	I0505 14:21:08.826111   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190: {Name:mk26b58616f2e9bcce56069037dda85d1d8c350c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.826754   56262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 ...
	I0505 14:21:08.826765   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190: {Name:mk7fc32008d240a4b7e6cb64bdeb1f596430582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:08.826983   56262 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
	I0505 14:21:08.827192   56262 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e5ea8190 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
	I0505 14:21:08.827434   56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:21:08.827443   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:21:08.827466   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:21:08.827487   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:21:08.827506   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:21:08.827523   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:21:08.827541   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:21:08.827559   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:21:08.827576   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:21:08.827667   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:21:08.827718   56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:21:08.827726   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:21:08.827758   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:21:08.827791   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:21:08.827822   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:21:08.827892   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:08.827924   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:08.827970   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:21:08.827988   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:21:08.828425   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:21:08.851250   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:21:08.872963   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:21:08.895079   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:21:08.922893   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 14:21:08.953937   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:21:08.983911   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:21:09.023252   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:21:09.070795   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:21:09.113576   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:21:09.150037   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:21:09.170089   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:21:09.184262   56262 ssh_runner.go:195] Run: openssl version
	I0505 14:21:09.188637   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:21:09.197186   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.200763   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.200802   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:21:09.205113   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:21:09.213846   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:21:09.222459   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.225992   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.226036   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:09.230212   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:21:09.238744   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:21:09.247131   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.250641   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.250684   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:21:09.254933   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:21:09.263283   56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:21:09.266913   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:21:09.271690   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:21:09.276202   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:21:09.280723   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:21:09.285120   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:21:09.289468   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:21:09.293767   56262 kubeadm.go:391] StartCluster: {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false
freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:21:09.293893   56262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:21:09.305167   56262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:21:09.312937   56262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:21:09.312947   56262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:21:09.312965   56262 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:21:09.313010   56262 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:21:09.320777   56262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:21:09.321098   56262 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671000" does not appear in /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.321183   56262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-53665/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671000" cluster setting kubeconfig missing "ha-671000" context setting]
	I0505 14:21:09.321347   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.321996   56262 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.322179   56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:21:09.322483   56262 cert_rotation.go:137] Starting client certificate rotation controller
	I0505 14:21:09.322660   56262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:21:09.330103   56262 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.51
	I0505 14:21:09.330115   56262 kubeadm.go:591] duration metric: took 17.1285ms to restartPrimaryControlPlane
	I0505 14:21:09.330120   56262 kubeadm.go:393] duration metric: took 36.320628ms to StartCluster
	I0505 14:21:09.330127   56262 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.330217   56262 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:09.330637   56262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:09.330863   56262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:21:09.330875   56262 start.go:240] waiting for startup goroutines ...
	I0505 14:21:09.330887   56262 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:21:09.373046   56262 out.go:177] * Enabled addons: 
	I0505 14:21:09.331023   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:09.395270   56262 addons.go:510] duration metric: took 64.318856ms for enable addons: enabled=[]
	I0505 14:21:09.395388   56262 start.go:245] waiting for cluster config update ...
	I0505 14:21:09.395406   56262 start.go:254] writing updated cluster config ...
	I0505 14:21:09.418289   56262 out.go:177] 
	I0505 14:21:09.439589   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:09.439723   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.462158   56262 out.go:177] * Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
	I0505 14:21:09.504016   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:21:09.504076   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:21:09.504246   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:21:09.504264   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:21:09.504398   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.505447   56262 start.go:360] acquireMachinesLock for ha-671000-m02: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:21:09.505557   56262 start.go:364] duration metric: took 85.865µs to acquireMachinesLock for "ha-671000-m02"
	I0505 14:21:09.505582   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:21:09.505589   56262 fix.go:54] fixHost starting: m02
	I0505 14:21:09.506042   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:09.506080   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:09.515413   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57896
	I0505 14:21:09.515746   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:09.516119   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:09.516136   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:09.516414   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:09.516555   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:09.516655   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:21:09.516736   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.516805   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56210
	I0505 14:21:09.517744   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
	I0505 14:21:09.517764   56262 fix.go:112] recreateIfNeeded on ha-671000-m02: state=Stopped err=<nil>
	I0505 14:21:09.517774   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	W0505 14:21:09.517855   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:21:09.539362   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m02" ...
	I0505 14:21:09.581177   56262 main.go:141] libmachine: (ha-671000-m02) Calling .Start
	I0505 14:21:09.581513   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.581582   56262 main.go:141] libmachine: (ha-671000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid
	I0505 14:21:09.583319   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56210 missing from process table
	I0505 14:21:09.583336   56262 main.go:141] libmachine: (ha-671000-m02) DBG | pid 56210 is in state "Stopped"
	I0505 14:21:09.583361   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid...
	I0505 14:21:09.583762   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Using UUID 294bfc97-3e6f-4d68-b3f3-54381951a5e8
	I0505 14:21:09.611765   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Generated MAC 92:83:2c:36:f7:7d
	I0505 14:21:09.611789   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:21:09.611924   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:21:09.611964   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b3e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:21:09.612015   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "294bfc97-3e6f-4d68-b3f3-54381951a5e8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/
machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:21:09.612064   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 294bfc97-3e6f-4d68-b3f3-54381951a5e8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:21:09.612079   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:21:09.613498   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 DEBUG: hyperkit: Pid is 56285
	I0505 14:21:09.613935   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Attempt 0
	I0505 14:21:09.613949   56262 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:09.614012   56262 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
	I0505 14:21:09.615713   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Searching for 92:83:2c:36:f7:7d in /var/db/dhcpd_leases ...
	I0505 14:21:09.615841   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:21:09.615860   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:21:09.615883   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:21:09.615897   56262 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394976}
	I0505 14:21:09.615905   56262 main.go:141] libmachine: (ha-671000-m02) DBG | Found match: 92:83:2c:36:f7:7d
	I0505 14:21:09.615916   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetConfigRaw
	I0505 14:21:09.615920   56262 main.go:141] libmachine: (ha-671000-m02) DBG | IP: 192.169.0.52
	I0505 14:21:09.616579   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:09.616779   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:21:09.617318   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:21:09.617329   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:09.617443   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:09.617536   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:09.617633   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:09.617737   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:09.617836   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:09.617968   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:09.618123   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:09.618132   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:21:09.621348   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:21:09.630281   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:21:09.631193   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:21:09.631218   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:21:09.631230   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:21:09.631252   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:21:10.019586   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:21:10.019603   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:21:10.134248   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:21:10.134266   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:21:10.134281   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:21:10.134292   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:21:10.135185   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:21:10.135199   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:21:15.419942   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:21:15.419970   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:21:15.419978   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:21:15.445269   56262 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:21:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:21:20.698093   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:21:20.698110   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.698266   56262 buildroot.go:166] provisioning hostname "ha-671000-m02"
	I0505 14:21:20.698277   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.698366   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.698443   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.698518   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.698602   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.698696   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.698824   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:20.698977   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:20.698987   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m02 && echo "ha-671000-m02" | sudo tee /etc/hostname
	I0505 14:21:20.773304   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m02
	
	I0505 14:21:20.773319   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.773451   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.773547   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.773625   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.773710   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.773837   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:20.773989   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:20.774000   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:21:20.846506   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:21:20.846523   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:21:20.846532   56262 buildroot.go:174] setting up certificates
	I0505 14:21:20.846537   56262 provision.go:84] configureAuth start
	I0505 14:21:20.846545   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:21:20.846678   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:20.846753   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.846822   56262 provision.go:143] copyHostCerts
	I0505 14:21:20.846847   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:20.846900   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:21:20.846906   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:21:20.847106   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:21:20.847298   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:20.847327   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:21:20.847332   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:21:20.847414   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:21:20.847555   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:20.847584   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:21:20.847588   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:21:20.847657   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:21:20.847808   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m02 san=[127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube]
	I0505 14:21:20.923054   56262 provision.go:177] copyRemoteCerts
	I0505 14:21:20.923102   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:21:20.923114   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:20.923242   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:20.923344   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:20.923432   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:20.923508   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:20.963007   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:21:20.963079   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:21:20.982214   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:21:20.982293   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:21:21.001587   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:21:21.001658   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:21:21.020765   56262 provision.go:87] duration metric: took 174.141582ms to configureAuth
	I0505 14:21:21.020780   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:21:21.020945   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:21.020958   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:21.021085   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.021186   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.021280   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.021382   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.021493   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.021630   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.021764   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.021777   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:21:21.088593   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:21:21.088605   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:21:21.088686   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:21:21.088698   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.088827   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.088944   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.089047   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.089155   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.089299   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.089434   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.089481   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:21:21.165319   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:21:21.165336   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:21.165469   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:21.165561   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.165660   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:21.165755   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:21.165892   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:21.166034   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:21.166046   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:21:22.810399   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:21:22.810414   56262 machine.go:97] duration metric: took 13.184745912s to provisionDockerMachine
	I0505 14:21:22.810422   56262 start.go:293] postStartSetup for "ha-671000-m02" (driver="hyperkit")
	I0505 14:21:22.810435   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:21:22.810448   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:22.810630   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:21:22.810642   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.810731   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.810813   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.810958   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.811059   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:22.854108   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:21:22.857587   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:21:22.857599   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:21:22.857687   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:21:22.857827   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:21:22.857833   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:21:22.857984   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:21:22.870076   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:22.896680   56262 start.go:296] duration metric: took 86.209325ms for postStartSetup
	I0505 14:21:22.896713   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:22.896900   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:21:22.896916   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.897010   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.897116   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.897207   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.897282   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:22.937842   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:21:22.937898   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:21:22.971365   56262 fix.go:56] duration metric: took 13.45726146s for fixHost
	I0505 14:21:22.971396   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:22.971537   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:22.971639   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.971717   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:22.971804   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:22.971961   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:21:22.972106   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:21:22.972117   56262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:21:23.038093   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944083.052286945
	
	I0505 14:21:23.038109   56262 fix.go:216] guest clock: 1714944083.052286945
	I0505 14:21:23.038115   56262 fix.go:229] Guest: 2024-05-05 14:21:23.052286945 -0700 PDT Remote: 2024-05-05 14:21:22.971379 -0700 PDT m=+34.042274957 (delta=80.907945ms)
	I0505 14:21:23.038125   56262 fix.go:200] guest clock delta is within tolerance: 80.907945ms
	I0505 14:21:23.038129   56262 start.go:83] releasing machines lock for "ha-671000-m02", held for 13.524025366s
	I0505 14:21:23.038145   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.038286   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:23.061518   56262 out.go:177] * Found network options:
	I0505 14:21:23.083843   56262 out.go:177]   - NO_PROXY=192.169.0.51
	W0505 14:21:23.105432   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:21:23.105470   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106334   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106599   56262 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:21:23.106711   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:21:23.106753   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	W0505 14:21:23.106918   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:21:23.107013   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:21:23.107023   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:23.107033   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:21:23.107244   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:23.107275   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:21:23.107414   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:21:23.107468   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:23.107556   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:21:23.107590   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:21:23.107700   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	W0505 14:21:23.143066   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:21:23.143128   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:21:23.312270   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:21:23.312288   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:23.312377   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:23.327567   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:21:23.336186   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:21:23.344528   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:21:23.344575   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:21:23.352890   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:23.361005   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:21:23.369046   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:21:23.377280   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:21:23.385827   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:21:23.394012   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:21:23.402113   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:21:23.410536   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:21:23.418126   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:21:23.425500   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:23.526138   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:21:23.544818   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:21:23.544892   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:21:23.559895   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:23.572081   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:21:23.584840   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:21:23.595478   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:23.606028   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:21:23.632278   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:21:23.643848   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:21:23.658675   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:21:23.661665   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:21:23.669850   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:21:23.683220   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:21:23.786303   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:21:23.893788   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:21:23.893809   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:21:23.908293   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:24.010074   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:21:26.298709   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.287835945s)
	I0505 14:21:26.298771   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:21:26.310190   56262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:21:26.324652   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:26.336377   56262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:21:26.435974   56262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:21:26.534723   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:26.647643   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:21:26.661375   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:21:26.672706   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:26.778709   56262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:21:26.840618   56262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:21:26.840697   56262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:21:26.844919   56262 start.go:562] Will wait 60s for crictl version
	I0505 14:21:26.844974   56262 ssh_runner.go:195] Run: which crictl
	I0505 14:21:26.849165   56262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:21:26.874329   56262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:21:26.874403   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:26.890208   56262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:21:26.929797   56262 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:21:26.949648   56262 out.go:177]   - env NO_PROXY=192.169.0.51
	I0505 14:21:26.970782   56262 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:21:26.971166   56262 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:21:26.975958   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:26.985550   56262 mustload.go:65] Loading cluster: ha-671000
	I0505 14:21:26.985727   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:26.985939   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:26.985954   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:26.994516   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57918
	I0505 14:21:26.994869   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:26.995203   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:26.995220   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:26.995417   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:26.995536   56262 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:21:26.995629   56262 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:21:26.995703   56262 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:21:26.996652   56262 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:21:26.996892   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:21:26.996917   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:21:27.005463   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57920
	I0505 14:21:27.005786   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:21:27.006124   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:21:27.006142   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:21:27.006378   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:21:27.006493   56262 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:21:27.006597   56262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.52
	I0505 14:21:27.006603   56262 certs.go:194] generating shared ca certs ...
	I0505 14:21:27.006614   56262 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:21:27.006755   56262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:21:27.006813   56262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:21:27.006821   56262 certs.go:256] generating profile certs ...
	I0505 14:21:27.006913   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:21:27.006999   56262 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.e823369f
	I0505 14:21:27.007048   56262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:21:27.007055   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:21:27.007075   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:21:27.007095   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:21:27.007113   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:21:27.007130   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:21:27.007151   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:21:27.007170   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:21:27.007187   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:21:27.007262   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:21:27.007299   56262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:21:27.007308   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:21:27.007341   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:21:27.007375   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:21:27.007408   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:21:27.007476   56262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:21:27.007517   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.007538   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.007556   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.007581   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:21:27.007663   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:21:27.007746   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:21:27.007820   56262 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:21:27.007907   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:21:27.036107   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 14:21:27.039382   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 14:21:27.047195   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 14:21:27.050362   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0505 14:21:27.058524   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 14:21:27.061585   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 14:21:27.069461   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 14:21:27.072439   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 14:21:27.080982   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 14:21:27.084070   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 14:21:27.092062   56262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 14:21:27.095149   56262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 14:21:27.103105   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:21:27.123887   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:21:27.144018   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:21:27.164034   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:21:27.183960   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0505 14:21:27.204170   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:21:27.224085   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:21:27.244379   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:21:27.264411   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:21:27.283983   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:21:27.303697   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:21:27.323613   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 14:21:27.337907   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0505 14:21:27.351842   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 14:21:27.365462   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 14:21:27.379337   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 14:21:27.393337   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 14:21:27.406867   56262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 14:21:27.420462   56262 ssh_runner.go:195] Run: openssl version
	I0505 14:21:27.425063   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:21:27.433747   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.437275   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.437314   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:21:27.441663   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:21:27.450070   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:21:27.458559   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.462027   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.462088   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:21:27.466402   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:21:27.474903   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:21:27.484026   56262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.487471   56262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.487506   56262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:21:27.491806   56262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:21:27.500356   56262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:21:27.503912   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:21:27.508255   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:21:27.512583   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:21:27.516997   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:21:27.521261   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:21:27.525514   56262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:21:27.529849   56262 kubeadm.go:928] updating node {m02 192.169.0.52 8443 v1.30.0 docker true true} ...
	I0505 14:21:27.529904   56262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:21:27.529918   56262 kube-vip.go:111] generating kube-vip config ...
	I0505 14:21:27.529952   56262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:21:27.542376   56262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:21:27.542421   56262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:21:27.542477   56262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:21:27.550208   56262 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:21:27.550254   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 14:21:27.557751   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 14:21:27.571295   56262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:21:27.584791   56262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:21:27.598438   56262 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:21:27.601396   56262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:21:27.610834   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:27.705062   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:27.720000   56262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:21:27.761967   56262 out.go:177] * Verifying Kubernetes components...
	I0505 14:21:27.720191   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:21:27.783193   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:21:27.916127   56262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:21:27.937011   56262 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:21:27.937198   56262 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x6257220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 14:21:27.937233   56262 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
	I0505 14:21:27.937400   56262 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:21:27.937478   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:27.937483   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:27.937491   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:27.937495   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.141758   56262 round_trippers.go:574] Response Status: 200 OK in 9202 milliseconds
	I0505 14:21:37.151494   56262 node_ready.go:49] node "ha-671000-m02" has status "Ready":"True"
	I0505 14:21:37.151510   56262 node_ready.go:38] duration metric: took 9.212150687s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:21:37.151520   56262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:21:37.151577   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:21:37.151583   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.151590   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.151594   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.191750   56262 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0505 14:21:37.198443   56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.198500   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:21:37.198504   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.198511   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.198515   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.209480   56262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0505 14:21:37.210158   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.210166   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.210172   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.210175   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.218742   56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:21:37.219086   56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.219096   56262 pod_ready.go:81] duration metric: took 20.63356ms for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.219105   56262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.219148   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
	I0505 14:21:37.219153   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.219162   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.219170   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.221463   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.221880   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.221889   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.221897   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.221905   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.226727   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:37.227035   56262 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.227045   56262 pod_ready.go:81] duration metric: took 7.931899ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.227052   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.227120   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
	I0505 14:21:37.227125   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.227131   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.227135   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.228755   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.229130   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.229137   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.229143   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.229147   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.230595   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.230887   56262 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.230895   56262 pod_ready.go:81] duration metric: took 3.837029ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.230901   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.230929   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
	I0505 14:21:37.230934   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.230939   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.230943   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.232448   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.232868   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:37.232875   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.232880   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.232887   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.234369   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.234695   56262 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.234704   56262 pod_ready.go:81] duration metric: took 3.797599ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.234710   56262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.234742   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m03
	I0505 14:21:37.234747   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.234753   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.234760   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.236183   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.351671   56262 request.go:629] Waited for 115.086464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:37.351703   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:37.351742   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.351749   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.351752   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.353285   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:37.353602   56262 pod_ready.go:92] pod "etcd-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.353612   56262 pod_ready.go:81] duration metric: took 118.878942ms for pod "etcd-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.353624   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.551816   56262 request.go:629] Waited for 198.124765ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:21:37.551893   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:21:37.551900   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.551906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.551909   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.554076   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.753242   56262 request.go:629] Waited for 198.55091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.753343   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:37.753355   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.753365   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.753371   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.756033   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:37.756647   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:37.756662   56262 pod_ready.go:81] duration metric: took 402.967586ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.756670   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:37.952604   56262 request.go:629] Waited for 195.869842ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:21:37.952645   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:21:37.952654   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:37.952662   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:37.952668   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:37.954903   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.151783   56262 request.go:629] Waited for 196.293382ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:38.151830   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:38.151837   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.151842   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.151847   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.156373   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:38.156768   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.156778   56262 pod_ready.go:81] duration metric: took 400.046736ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.156785   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.351807   56262 request.go:629] Waited for 194.95401ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
	I0505 14:21:38.351854   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m03
	I0505 14:21:38.351862   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.351904   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.351908   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.354097   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.552842   56262 request.go:629] Waited for 198.080217ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:38.552968   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:38.552980   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.552990   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.552997   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.555719   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.556135   56262 pod_ready.go:92] pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.556146   56262 pod_ready.go:81] duration metric: took 399.298154ms for pod "kube-apiserver-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.556153   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.752061   56262 request.go:629] Waited for 195.828299ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:21:38.752126   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:21:38.752135   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.752148   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.752158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.754957   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:38.951929   56262 request.go:629] Waited for 196.315529ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:38.951959   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:38.951964   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:38.951969   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:38.951973   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:38.953886   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:38.954275   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:38.954284   56262 pod_ready.go:81] duration metric: took 398.072724ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:38.954297   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:39.151925   56262 request.go:629] Waited for 197.547759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.152007   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.152019   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.152025   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.152029   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.157962   56262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:21:39.352575   56262 request.go:629] Waited for 194.147234ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.352619   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.352625   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.352631   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.352635   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.356708   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:39.553301   56262 request.go:629] Waited for 97.737035ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.553334   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.553340   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.553346   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.553351   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.555371   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:39.752052   56262 request.go:629] Waited for 196.251955ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.752134   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:39.752145   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.752153   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.752158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.754627   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:39.955025   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:39.955059   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:39.955067   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:39.955072   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:39.956871   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.152049   56262 request.go:629] Waited for 194.641301ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.152132   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.152171   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.152184   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.152191   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.154660   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.456022   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:40.456041   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.456050   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.456056   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.458617   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.552124   56262 request.go:629] Waited for 92.99221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.552206   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.552212   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.552220   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.552225   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.554220   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.956144   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:40.956162   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.956168   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.956172   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.958759   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:40.959215   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:40.959223   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:40.959229   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:40.959232   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:40.960907   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:40.961228   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:41.455646   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:41.455689   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.455698   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.455722   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.457872   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:41.458331   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:41.458339   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.458344   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.458355   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.460082   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:41.955474   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:41.955516   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.955524   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.955528   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.957597   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:41.958178   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:41.958186   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:41.958190   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:41.958193   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:41.960269   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:42.454954   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:42.454969   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.454975   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.454978   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.456939   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:42.457382   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:42.457390   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.457395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.457398   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.459026   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:42.955443   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:42.955465   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.955493   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.955500   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.957908   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:42.958355   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:42.958362   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:42.958368   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:42.958371   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:42.959853   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:43.455723   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:43.455776   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.455798   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.455806   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.458560   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:43.458997   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:43.459004   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.459009   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.459013   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.460509   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:43.460811   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:43.955429   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:43.955470   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.955481   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.955487   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.957836   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:43.958298   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:43.958305   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:43.958310   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:43.958320   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:43.960083   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:44.455061   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:44.455081   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.455088   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.455091   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.458998   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:44.459504   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:44.459511   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.459517   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.459521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.461518   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:44.956537   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:44.956577   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.956598   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.956604   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.959253   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:44.959715   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:44.959723   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:44.959729   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:44.959733   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:44.961411   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:45.455377   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:45.455402   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.455414   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.455420   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.458080   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:45.458718   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:45.458729   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.458736   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.458752   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.463742   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:45.464348   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:45.955580   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:45.955620   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.955630   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.955635   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.957968   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:45.958442   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:45.958449   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:45.958455   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:45.958466   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:45.959999   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:46.457118   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:46.457136   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.457145   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.457149   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.459543   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:46.460023   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:46.460031   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.460036   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.460047   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.461647   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:46.956302   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:46.956318   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.956324   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.956326   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.958416   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:46.958859   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:46.958866   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:46.958872   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:46.958874   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:46.960501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.456753   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:47.456797   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.456806   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.456812   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.458891   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:47.459328   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:47.459336   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.459342   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.459345   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.460911   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.955503   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:47.955545   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.955558   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.955564   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.959575   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:47.960158   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:47.960166   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:47.960171   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:47.960175   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:47.961799   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:47.962164   56262 pod_ready.go:102] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:48.456730   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:21:48.456747   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.456753   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.456757   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.460539   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:48.461047   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:48.461055   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.461061   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.461064   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.465508   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:21:48.465989   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:48.465998   56262 pod_ready.go:81] duration metric: took 9.510763792s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.466006   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.466042   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m03
	I0505 14:21:48.466047   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.466052   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.466055   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.472370   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:21:48.473005   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:21:48.473012   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.473017   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.473020   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.481996   56262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:21:48.482501   56262 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:48.482510   56262 pod_ready.go:81] duration metric: took 16.497528ms for pod "kube-controller-manager-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.482517   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:48.482551   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:48.482556   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.482561   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.482565   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.490468   56262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 14:21:48.491138   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:48.491145   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:48.491151   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:48.491155   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:48.494380   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:48.983087   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.004024   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.004031   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.004035   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.006380   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.007016   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.007024   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.007030   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.007033   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.008914   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:49.483919   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.483931   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.483938   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.483941   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.486104   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.486673   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.486681   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.486687   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.486691   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.488609   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:49.983081   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:49.983096   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.983104   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.983108   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.985873   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:49.986420   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:49.986428   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:49.986434   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:49.986437   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:49.988349   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:50.482957   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:50.482970   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.482976   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.482980   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.485479   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:50.485920   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:50.485927   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.485934   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.485938   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.487720   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:50.488107   56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:50.983210   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:50.983225   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.983232   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.983236   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.986255   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:50.986840   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:50.986849   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:50.986855   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:50.986866   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:50.989948   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:51.483355   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:51.483374   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.483388   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.483395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.486820   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:51.487280   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:51.487287   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.487293   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.487297   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.489325   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:51.983090   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:51.983105   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.983112   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.983115   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.984988   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:51.985393   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:51.985401   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:51.985405   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:51.985410   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:51.986930   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:52.484493   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:52.484507   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.484516   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.484521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.487250   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:52.487686   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:52.487694   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.487698   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.487702   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.489501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:52.489895   56262 pod_ready.go:102] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:52.983025   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:52.983048   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.983059   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.983066   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.986110   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:52.986621   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:52.986629   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:52.986634   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:52.986639   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:52.988098   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:53.484742   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:53.484762   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:53.484773   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:53.484779   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:53.488010   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:53.488477   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:53.488487   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:53.488495   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:53.488501   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:53.490598   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:53.982981   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:54.035555   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.035577   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.035582   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.038056   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:54.038420   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:54.038427   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.038431   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.038436   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.040740   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:54.483231   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:21:54.483250   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.483259   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.483268   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.486904   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:54.487432   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:21:54.487440   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.487445   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.487453   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.489085   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.489450   56262 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:54.489459   56262 pod_ready.go:81] duration metric: took 6.006607245s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.489472   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.489506   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:21:54.489511   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.489516   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.489520   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.491341   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.492125   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:21:54.492155   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.492161   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.492166   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.494017   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.494387   56262 pod_ready.go:92] pod "kube-proxy-b45s6" in "kube-system" namespace has status "Ready":"True"
	I0505 14:21:54.494395   56262 pod_ready.go:81] duration metric: took 4.917824ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.494401   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:21:54.494436   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:54.494441   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.494447   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.494452   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.496166   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.496620   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:54.496627   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.496633   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.496637   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.498306   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:54.996074   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:54.996123   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.996136   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.996145   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:54.999201   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:54.999706   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:54.999714   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:54.999720   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:54.999724   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.001519   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:55.495423   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:55.495482   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.495494   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.495500   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.498280   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:55.498730   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:55.498738   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.498744   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.498748   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.500462   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:55.995317   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:55.995337   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.995349   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.995356   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:55.998789   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:55.999222   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:55.999231   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:55.999238   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:55.999241   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.001041   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:56.494888   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:56.494946   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.494958   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.494968   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.497790   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:56.498347   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:56.498358   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.498365   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.498371   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.500278   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:56.500656   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:56.994875   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:56.994892   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.994900   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.994906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:56.998618   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:56.999206   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:56.999214   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:56.999220   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:56.999223   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.000855   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:57.495334   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:57.495358   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.495370   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.495375   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.498502   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:57.498951   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:57.498958   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.498963   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.498966   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.500746   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:57.995520   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:57.995543   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.995579   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.995598   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:57.998407   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:57.998972   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:57.998979   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:57.998985   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:57.999001   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.000625   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:58.495031   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:58.495049   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:58.495061   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:58.495067   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.498099   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:58.498667   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:58.498677   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:58.498685   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:58.498691   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:58.500315   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:58.995219   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.001733   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.001744   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.001750   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.004276   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:59.004776   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.004783   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.004788   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.004792   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.006346   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:21:59.006731   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:21:59.495209   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.495224   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.495243   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.495269   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.498470   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:59.498897   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.498905   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.498911   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.498915   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.501440   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:21:59.995151   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:21:59.995179   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.995191   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.995198   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:21:59.998453   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:21:59.999020   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:21:59.999031   56262 round_trippers.go:469] Request Headers:
	I0505 14:21:59.999039   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:21:59.999043   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.000983   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:00.495135   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:00.495148   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.495154   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.495158   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.498254   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:00.499175   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:00.499184   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.499190   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.499193   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.501895   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:00.995194   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:00.995216   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.995229   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:00.995237   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.998468   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:00.998920   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:00.998926   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:00.998932   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:00.998935   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.000600   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:01.494835   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:01.494860   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.494871   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.494877   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.497889   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:01.498547   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:01.498554   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.498558   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.498561   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.500447   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:01.500751   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:22:01.996453   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:01.996472   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.996483   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.996490   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:01.999407   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:01.999918   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:01.999925   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:01.999931   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:01.999934   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.001706   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:02.495361   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:02.495382   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.495393   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.495400   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.498902   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:02.499504   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:02.499511   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.499517   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.499521   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.501049   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:02.995527   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:02.995548   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.995559   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:02.995565   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.998530   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:02.998981   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:02.998988   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:02.998994   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:02.998999   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.000798   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:03.495714   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:03.495730   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:03.495737   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:03.495741   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.498051   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:03.498563   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:03.498571   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:03.498576   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:03.498588   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:03.500374   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:03.995061   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.002434   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.002442   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.002447   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.004861   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:04.005402   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.005409   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.005415   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.005418   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.011753   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:04.012403   56262 pod_ready.go:102] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"False"
	I0505 14:22:04.494873   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.494893   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.494902   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.494906   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.497460   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:04.497938   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.497946   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.497951   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.497960   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.499356   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:04.995159   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:04.995178   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.995188   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.995195   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:04.998687   56262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:22:04.999335   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:04.999342   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:04.999348   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:04.999353   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.000905   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.494984   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:22:05.494997   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.495003   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.495007   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.497333   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.497727   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:05.497735   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.497741   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.497744   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.499501   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.500069   56262 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.500079   56262 pod_ready.go:81] duration metric: took 11.005361676s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.500095   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.500132   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zwgd2
	I0505 14:22:05.500137   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.500142   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.500146   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.502320   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.502750   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:22:05.502757   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.502763   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.502767   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.504769   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.505126   56262 pod_ready.go:92] pod "kube-proxy-zwgd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.505135   56262 pod_ready.go:81] duration metric: took 5.036025ms for pod "kube-proxy-zwgd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.505142   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.505179   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:22:05.505184   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.505189   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.505194   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.507083   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.507461   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:22:05.507468   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.507473   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.507477   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.509224   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.509709   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.509724   56262 pod_ready.go:81] duration metric: took 4.57068ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.509732   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.509767   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:22:05.509771   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.509777   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.509780   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.511597   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.511989   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:22:05.511996   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.512000   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.512010   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.514080   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.514548   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.514556   56262 pod_ready.go:81] duration metric: took 4.819427ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.514563   56262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.514599   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m03
	I0505 14:22:05.514603   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.514609   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.514612   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.516436   56262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:22:05.516907   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m03
	I0505 14:22:05.516914   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.516919   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.516923   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.519043   56262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:22:05.519280   56262 pod_ready.go:92] pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace has status "Ready":"True"
	I0505 14:22:05.519288   56262 pod_ready.go:81] duration metric: took 4.719804ms for pod "kube-scheduler-ha-671000-m03" in "kube-system" namespace to be "Ready" ...
	I0505 14:22:05.519294   56262 pod_ready.go:38] duration metric: took 28.365933714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:22:05.519320   56262 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:22:05.519375   56262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:22:05.533426   56262 api_server.go:72] duration metric: took 37.809561996s to wait for apiserver process to appear ...
	I0505 14:22:05.533438   56262 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:22:05.533454   56262 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
	I0505 14:22:05.537141   56262 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
	ok
	I0505 14:22:05.537173   56262 round_trippers.go:463] GET https://192.169.0.51:8443/version
	I0505 14:22:05.537183   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.537191   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.537195   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.537884   56262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0505 14:22:05.538028   56262 api_server.go:141] control plane version: v1.30.0
	I0505 14:22:05.538038   56262 api_server.go:131] duration metric: took 4.594882ms to wait for apiserver health ...
	I0505 14:22:05.538049   56262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 14:22:05.696401   56262 request.go:629] Waited for 158.305976ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:05.696517   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:05.696529   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.696539   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.696547   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.703009   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:05.708412   56262 system_pods.go:59] 26 kube-system pods found
	I0505 14:22:05.708432   56262 system_pods.go:61] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:05.708439   56262 system_pods.go:61] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:05.708445   56262 system_pods.go:61] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:22:05.708448   56262 system_pods.go:61] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:22:05.708451   56262 system_pods.go:61] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
	I0505 14:22:05.708458   56262 system_pods.go:61] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
	I0505 14:22:05.708462   56262 system_pods.go:61] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:22:05.708464   56262 system_pods.go:61] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:22:05.708468   56262 system_pods.go:61] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0505 14:22:05.708471   56262 system_pods.go:61] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:22:05.708474   56262 system_pods.go:61] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:22:05.708477   56262 system_pods.go:61] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
	I0505 14:22:05.708482   56262 system_pods.go:61] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:22:05.708487   56262 system_pods.go:61] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:22:05.708489   56262 system_pods.go:61] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
	I0505 14:22:05.708493   56262 system_pods.go:61] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:22:05.708495   56262 system_pods.go:61] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:22:05.708497   56262 system_pods.go:61] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:22:05.708500   56262 system_pods.go:61] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
	I0505 14:22:05.708502   56262 system_pods.go:61] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:22:05.708505   56262 system_pods.go:61] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:22:05.708507   56262 system_pods.go:61] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
	I0505 14:22:05.708510   56262 system_pods.go:61] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:22:05.708512   56262 system_pods.go:61] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:22:05.708515   56262 system_pods.go:61] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
	I0505 14:22:05.708520   56262 system_pods.go:61] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:22:05.708525   56262 system_pods.go:74] duration metric: took 170.469417ms to wait for pod list to return data ...
	I0505 14:22:05.708531   56262 default_sa.go:34] waiting for default service account to be created ...
	I0505 14:22:05.897069   56262 request.go:629] Waited for 188.474109ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:22:05.897179   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:22:05.897186   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:05.897194   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:05.897199   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:05.950188   56262 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0505 14:22:05.950392   56262 default_sa.go:45] found service account: "default"
	I0505 14:22:05.950405   56262 default_sa.go:55] duration metric: took 241.864725ms for default service account to be created ...
	I0505 14:22:05.950412   56262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 14:22:06.095263   56262 request.go:629] Waited for 144.804696ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:06.095366   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:22:06.095376   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:06.095388   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:06.095395   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:06.102144   56262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:22:06.107768   56262 system_pods.go:86] 26 kube-system pods found
	I0505 14:22:06.107783   56262 system_pods.go:89] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:06.107794   56262 system_pods.go:89] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0505 14:22:06.107800   56262 system_pods.go:89] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:22:06.107803   56262 system_pods.go:89] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:22:06.107808   56262 system_pods.go:89] "etcd-ha-671000-m03" [67d2962f-d3f7-42d2-8334-cc42cb3ca5a5] Running
	I0505 14:22:06.107811   56262 system_pods.go:89] "kindnet-cbt9x" [c35bdc79-4b12-4822-ae38-767c7d16c96a] Running
	I0505 14:22:06.107815   56262 system_pods.go:89] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:22:06.107818   56262 system_pods.go:89] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:22:06.107823   56262 system_pods.go:89] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0505 14:22:06.107826   56262 system_pods.go:89] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:22:06.107831   56262 system_pods.go:89] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:22:06.107834   56262 system_pods.go:89] "kube-apiserver-ha-671000-m03" [83168cbb-d4d0-4793-9f59-1c8cd4c2616f] Running
	I0505 14:22:06.107838   56262 system_pods.go:89] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:22:06.107842   56262 system_pods.go:89] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:22:06.107847   56262 system_pods.go:89] "kube-controller-manager-ha-671000-m03" [c7fa4cb4-20a0-431f-8f1a-fb9bf2f0d702] Running
	I0505 14:22:06.107854   56262 system_pods.go:89] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:22:06.107862   56262 system_pods.go:89] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:22:06.107866   56262 system_pods.go:89] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:22:06.107869   56262 system_pods.go:89] "kube-proxy-zwgd2" [e87cf8e2-923f-499e-a740-60cd8b02b805] Running
	I0505 14:22:06.107874   56262 system_pods.go:89] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:22:06.107877   56262 system_pods.go:89] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:22:06.107887   56262 system_pods.go:89] "kube-scheduler-ha-671000-m03" [0c85a6ad-ae3e-4895-8a7f-f36385b1eb0b] Running
	I0505 14:22:06.107890   56262 system_pods.go:89] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:22:06.107894   56262 system_pods.go:89] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:22:06.107897   56262 system_pods.go:89] "kube-vip-ha-671000-m03" [4866dc28-b7e1-4387-8e4a-cf819f426faa] Running
	I0505 14:22:06.107900   56262 system_pods.go:89] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:22:06.107905   56262 system_pods.go:126] duration metric: took 157.48572ms to wait for k8s-apps to be running ...
	I0505 14:22:06.107910   56262 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 14:22:06.107954   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:22:06.119916   56262 system_svc.go:56] duration metric: took 12.002036ms WaitForService to wait for kubelet
	I0505 14:22:06.119930   56262 kubeadm.go:576] duration metric: took 38.396059047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:22:06.119941   56262 node_conditions.go:102] verifying NodePressure condition ...
	I0505 14:22:06.295252   56262 request.go:629] Waited for 175.271788ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
	I0505 14:22:06.295332   56262 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
	I0505 14:22:06.295338   56262 round_trippers.go:469] Request Headers:
	I0505 14:22:06.295345   56262 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:22:06.295350   56262 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:22:06.299820   56262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:22:06.300760   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300774   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300783   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300787   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300791   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300794   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300797   56262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:22:06.300801   56262 node_conditions.go:123] node cpu capacity is 2
	I0505 14:22:06.300804   56262 node_conditions.go:105] duration metric: took 180.85639ms to run NodePressure ...
	I0505 14:22:06.300811   56262 start.go:240] waiting for startup goroutines ...
	I0505 14:22:06.300829   56262 start.go:254] writing updated cluster config ...
	I0505 14:22:06.322636   56262 out.go:177] 
	I0505 14:22:06.343913   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:22:06.344042   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.366539   56262 out.go:177] * Starting "ha-671000-m03" control-plane node in "ha-671000" cluster
	I0505 14:22:06.408466   56262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:22:06.408493   56262 cache.go:56] Caching tarball of preloaded images
	I0505 14:22:06.408686   56262 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:22:06.408703   56262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:22:06.408834   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.409908   56262 start.go:360] acquireMachinesLock for ha-671000-m03: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:22:06.409993   56262 start.go:364] duration metric: took 67.566µs to acquireMachinesLock for "ha-671000-m03"
	I0505 14:22:06.410011   56262 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:22:06.410016   56262 fix.go:54] fixHost starting: m03
	I0505 14:22:06.410315   56262 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:22:06.410333   56262 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:22:06.419592   56262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57925
	I0505 14:22:06.419993   56262 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:22:06.420359   56262 main.go:141] libmachine: Using API Version  1
	I0505 14:22:06.420375   56262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:22:06.420588   56262 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:22:06.420701   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:06.420780   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetState
	I0505 14:22:06.420862   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.420955   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 55740
	I0505 14:22:06.421873   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
	I0505 14:22:06.421938   56262 fix.go:112] recreateIfNeeded on ha-671000-m03: state=Stopped err=<nil>
	I0505 14:22:06.421958   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	W0505 14:22:06.422054   56262 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:22:06.443498   56262 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m03" ...
	I0505 14:22:06.485588   56262 main.go:141] libmachine: (ha-671000-m03) Calling .Start
	I0505 14:22:06.485823   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.485876   56262 main.go:141] libmachine: (ha-671000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid
	I0505 14:22:06.487603   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid 55740 missing from process table
	I0505 14:22:06.487617   56262 main.go:141] libmachine: (ha-671000-m03) DBG | pid 55740 is in state "Stopped"
	I0505 14:22:06.487633   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid...
	I0505 14:22:06.488242   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Using UUID be90591f-7869-4905-ae38-2f481381ca7c
	I0505 14:22:06.514163   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Generated MAC ce:17:a:56:1e:f8
	I0505 14:22:06.514197   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:22:06.514318   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:22:06.514365   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"be90591f-7869-4905-ae38-2f481381ca7c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003be9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:22:06.514413   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "be90591f-7869-4905-ae38-2f481381ca7c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:22:06.514460   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U be90591f-7869-4905-ae38-2f481381ca7c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/ha-671000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:22:06.514470   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:22:06.515957   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 DEBUG: hyperkit: Pid is 56300
	I0505 14:22:06.516349   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Attempt 0
	I0505 14:22:06.516370   56262 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:22:06.516444   56262 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 56300
	I0505 14:22:06.518246   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Searching for ce:17:a:56:1e:f8 in /var/db/dhcpd_leases ...
	I0505 14:22:06.518360   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:22:06.518376   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
	I0505 14:22:06.518417   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:22:06.518433   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:22:06.518449   56262 main.go:141] libmachine: (ha-671000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x663948d2}
	I0505 14:22:06.518457   56262 main.go:141] libmachine: (ha-671000-m03) DBG | Found match: ce:17:a:56:1e:f8
	I0505 14:22:06.518467   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetConfigRaw
	I0505 14:22:06.518473   56262 main.go:141] libmachine: (ha-671000-m03) DBG | IP: 192.169.0.53
	I0505 14:22:06.519132   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:06.519357   56262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:22:06.519808   56262 machine.go:94] provisionDockerMachine start ...
	I0505 14:22:06.519818   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:06.519942   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:06.520079   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:06.520182   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:06.520284   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:06.520381   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:06.520488   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:06.520648   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:06.520655   56262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:22:06.524407   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:22:06.532556   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:22:06.533607   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:22:06.533622   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:22:06.533633   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:22:06.533644   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:22:06.917916   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:22:06.917942   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:22:07.032632   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:22:07.032653   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:22:07.032677   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:22:07.032689   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:22:07.033533   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:22:07.033546   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:22:12.402771   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:22:12.402786   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:22:12.402806   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:22:12.426606   56262 main.go:141] libmachine: (ha-671000-m03) DBG | 2024/05/05 14:22:12 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:22:41.581350   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:22:41.581367   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.581506   56262 buildroot.go:166] provisioning hostname "ha-671000-m03"
	I0505 14:22:41.581517   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.581600   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.581683   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.581781   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.581875   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.581960   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.582100   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.582238   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.582247   56262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m03 && echo "ha-671000-m03" | sudo tee /etc/hostname
	I0505 14:22:41.647083   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m03
	
	I0505 14:22:41.647098   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.647232   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.647343   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.647430   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.647521   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.647657   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.647849   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.647862   56262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:22:41.709306   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:22:41.709326   56262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:22:41.709344   56262 buildroot.go:174] setting up certificates
	I0505 14:22:41.709357   56262 provision.go:84] configureAuth start
	I0505 14:22:41.709363   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetMachineName
	I0505 14:22:41.709499   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:41.709593   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.709680   56262 provision.go:143] copyHostCerts
	I0505 14:22:41.709715   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:22:41.709786   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:22:41.709792   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:22:41.709937   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:22:41.710168   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:22:41.710212   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:22:41.710217   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:22:41.710297   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:22:41.710445   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:22:41.710490   56262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:22:41.710497   56262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:22:41.710575   56262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:22:41.710718   56262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m03 san=[127.0.0.1 192.169.0.53 ha-671000-m03 localhost minikube]
	I0505 14:22:41.753782   56262 provision.go:177] copyRemoteCerts
	I0505 14:22:41.753842   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:22:41.753857   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.753999   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.754106   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.754195   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.754274   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:41.788993   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:22:41.789066   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:22:41.808008   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:22:41.808084   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:22:41.828147   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:22:41.828228   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:22:41.848543   56262 provision.go:87] duration metric: took 139.178952ms to configureAuth
	I0505 14:22:41.848558   56262 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:22:41.848732   56262 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:22:41.848746   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:41.848890   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.848974   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.849066   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.849145   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.849226   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.849346   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.849468   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.849476   56262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:22:41.905134   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:22:41.905147   56262 buildroot.go:70] root file system type: tmpfs
	I0505 14:22:41.905226   56262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:22:41.905236   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.905372   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.905459   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.905559   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.905645   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.905773   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.905913   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.905965   56262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	Environment="NO_PROXY=192.169.0.51,192.169.0.52"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:22:41.971506   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	Environment=NO_PROXY=192.169.0.51,192.169.0.52
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:22:41.971532   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:41.971667   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:41.971753   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.971832   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:41.971919   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:41.972061   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:41.972206   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:41.972218   56262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:22:43.586757   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:22:43.586772   56262 machine.go:97] duration metric: took 37.066967123s to provisionDockerMachine
	I0505 14:22:43.586795   56262 start.go:293] postStartSetup for "ha-671000-m03" (driver="hyperkit")
	I0505 14:22:43.586804   56262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:22:43.586816   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.587008   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:22:43.587022   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.587109   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.587250   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.587368   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.587470   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.621728   56262 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:22:43.624913   56262 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:22:43.624927   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:22:43.625027   56262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:22:43.625208   56262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:22:43.625215   56262 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:22:43.625422   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:22:43.632883   56262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:22:43.652930   56262 start.go:296] duration metric: took 66.125789ms for postStartSetup
	I0505 14:22:43.652964   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.653131   56262 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:22:43.653145   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.653240   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.653328   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.653413   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.653505   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.687474   56262 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:22:43.687532   56262 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:22:43.719424   56262 fix.go:56] duration metric: took 37.309414657s for fixHost
	I0505 14:22:43.719447   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.719581   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.719680   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.719767   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.719859   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.719991   56262 main.go:141] libmachine: Using SSH client type: native
	I0505 14:22:43.720140   56262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4db1b80] 0x4db48e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0505 14:22:43.720147   56262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:22:43.777003   56262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944163.917671963
	
	I0505 14:22:43.777016   56262 fix.go:216] guest clock: 1714944163.917671963
	I0505 14:22:43.777022   56262 fix.go:229] Guest: 2024-05-05 14:22:43.917671963 -0700 PDT Remote: 2024-05-05 14:22:43.719438 -0700 PDT m=+114.784889102 (delta=198.233963ms)
	I0505 14:22:43.777033   56262 fix.go:200] guest clock delta is within tolerance: 198.233963ms
	I0505 14:22:43.777036   56262 start.go:83] releasing machines lock for "ha-671000-m03", held for 37.367046714s
	I0505 14:22:43.777054   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.777184   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:22:43.798458   56262 out.go:177] * Found network options:
	I0505 14:22:43.818375   56262 out.go:177]   - NO_PROXY=192.169.0.51,192.169.0.52
	W0505 14:22:43.839196   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:22:43.839212   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:22:43.839223   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839636   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839763   56262 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:22:43.839847   56262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:22:43.839883   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	W0505 14:22:43.839885   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:22:43.839898   56262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:22:43.839953   56262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:22:43.839970   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:22:43.839989   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.840065   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:22:43.840123   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.840188   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:22:43.840221   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.840303   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:22:43.840332   56262 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:22:43.840420   56262 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	W0505 14:22:43.919168   56262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:22:43.919245   56262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:22:43.936501   56262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:22:43.936515   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:22:43.936582   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:22:43.953774   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:22:43.963068   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:22:43.972111   56262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:22:43.972163   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:22:43.981147   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:22:44.011701   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:22:44.020897   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:22:44.030143   56262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:22:44.039491   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:22:44.048778   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:22:44.057937   56262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:22:44.067298   56262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:22:44.075698   56262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:22:44.083983   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:22:44.200980   56262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:22:44.219877   56262 start.go:494] detecting cgroup driver to use...
	I0505 14:22:44.219946   56262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:22:44.236639   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:22:44.254367   56262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:22:44.271268   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:22:44.282915   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:22:44.293466   56262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:22:44.317181   56262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:22:44.327878   56262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:22:44.343024   56262 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:22:44.346054   56262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:22:44.353257   56262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:22:44.367082   56262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:22:44.465180   56262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:22:44.569600   56262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:22:44.569629   56262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:22:44.584431   56262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:22:44.680947   56262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:23:45.736510   56262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.056089884s)
	I0505 14:23:45.736595   56262 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0505 14:23:45.770790   56262 out.go:177] 
	W0505 14:23:45.791249   56262 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 05 21:22:41 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.352208248Z" level=info msg="Starting up"
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.353022730Z" level=info msg="containerd not running, starting managed containerd"
	May 05 21:22:41 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:41.358767057Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.373539189Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388000547Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388073973Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388137944Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388171760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388313706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388355785Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388477111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388518957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388551610Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388580389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388726935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.388950191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390520791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390570725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390706880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390751886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390888815Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390940476Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.390972496Z" level=info msg="metadata content store policy set" policy=shared
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394800432Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394883868Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.394961138Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395000278Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395036706Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395111009Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395337703Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395418767Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395454129Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395484232Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395514263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395546554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395576938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395607440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395641518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395677040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395708605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395737963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395799761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395843188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395874408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395904381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395933636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395965927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.395995431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396033716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396067448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396098841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396127871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396155969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396184510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396215668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396250321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396280045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396307939Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396379697Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396424577Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396475305Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396510849Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396569471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396621386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396656010Z" level=info msg="NRI interface is disabled by configuration."
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396883316Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.396972499Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397031244Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 21:22:41 ha-671000-m03 dockerd[518]: time="2024-05-05T21:22:41.397069101Z" level=info msg="containerd successfully booted in 0.024677s"
	May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.379929944Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 21:22:42 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:42.413119848Z" level=info msg="Loading containers: start."
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.663705690Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.700545709Z" level=info msg="Loading containers: done."
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707501270Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.707669278Z" level=info msg="Daemon has completed initialization"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725886686Z" level=info msg="API listen on [::]:2376"
	May 05 21:22:43 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:43.725971765Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 21:22:43 ha-671000-m03 systemd[1]: Started Docker Application Container Engine.
	May 05 21:22:44 ha-671000-m03 systemd[1]: Stopping Docker Application Container Engine...
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.833114404Z" level=info msg="Processing signal 'terminated'"
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834199869Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834666188Z" level=info msg="Daemon shutdown complete"
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834695637Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 21:22:44 ha-671000-m03 dockerd[512]: time="2024-05-05T21:22:44.834707874Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 21:22:45 ha-671000-m03 systemd[1]: docker.service: Deactivated successfully.
	May 05 21:22:45 ha-671000-m03 systemd[1]: Stopped Docker Application Container Engine.
	May 05 21:22:45 ha-671000-m03 systemd[1]: Starting Docker Application Container Engine...
	May 05 21:22:45 ha-671000-m03 dockerd[1073]: time="2024-05-05T21:22:45.887265470Z" level=info msg="Starting up"
	May 05 21:23:45 ha-671000-m03 dockerd[1073]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 21:23:45 ha-671000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 21:23:45 ha-671000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0505 14:23:45.791332   56262 out.go:239] * 
	W0505 14:23:45.791963   56262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:23:45.854203   56262 out.go:177] 
	
	
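	The failure above bottoms out in dockerd on ha-671000-m03 timing out while dialing containerd (`failed to dial "/run/containerd/containerd.sock": context deadline exceeded`). For reference only, a minimal sketch of the follow-up checks the error output suggests, assuming shell access to the affected node via `minikube ssh` (the profile/node flags below are taken from this report; they are an illustrative assumption, not commands recorded in this log):
	
	# Sketch: inspect Docker and containerd state on the failing secondary control-plane node.
	minikube ssh -p ha-671000 -n ha-671000-m03 -- sudo systemctl status docker containerd
	minikube ssh -p ha-671000 -n ha-671000-m03 -- sudo journalctl -xeu docker --no-pager
	minikube ssh -p ha-671000 -n ha-671000-m03 -- sudo journalctl -u containerd --no-pager
	# Check whether the socket dockerd is trying to dial actually exists:
	minikube ssh -p ha-671000 -n ha-671000-m03 -- ls -l /run/containerd/containerd.sock
	
	The remaining sections below are the post-mortem log dump collected from the primary node ha-671000.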
	==> Docker <==
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.249023707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:34 ha-671000 dockerd[1130]: time="2024-05-05T21:22:34.316945093Z" level=info msg="ignoring event" container=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317591194Z" level=info msg="shim disconnected" id=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 namespace=moby
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317738677Z" level=warning msg="cleaning up after shim disconnected" id=0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377 namespace=moby
	May 05 21:22:34 ha-671000 dockerd[1136]: time="2024-05-05T21:22:34.317783286Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235098682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235605348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235714710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:22:36 ha-671000 dockerd[1136]: time="2024-05-05T21:22:36.235995155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:23:47 ha-671000 dockerd[1130]: 2024/05/05 21:23:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:47 ha-671000 dockerd[1130]: 2024/05/05 21:23:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:47 ha-671000 dockerd[1130]: 2024/05/05 21:23:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:47 ha-671000 dockerd[1130]: 2024/05/05 21:23:47 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:48 ha-671000 dockerd[1130]: 2024/05/05 21:23:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:49 ha-671000 dockerd[1130]: 2024/05/05 21:23:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:49 ha-671000 dockerd[1130]: 2024/05/05 21:23:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:49 ha-671000 dockerd[1130]: 2024/05/05 21:23:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:49 ha-671000 dockerd[1130]: 2024/05/05 21:23:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 05 21:23:49 ha-671000 dockerd[1130]: 2024/05/05 21:23:49 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4e72d733bb177       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   17013aecf8e89       coredns-7db6d8ff4d-hqtd2
	a5ba9a7a24b6f       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   5a876c8ef945c       coredns-7db6d8ff4d-kjf54
	c048dc81e6392       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   382155dbcfe93       kindnet-zvz9x
	76503e51b3afa       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   8637a9efa2c11       busybox-fc5497c4f-lfn9v
	7001a9c78d0af       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   f930d07fb2b00       kube-proxy-kppdj
	0883553982a24       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cca445b0e122c       storage-provisioner
	64c952108db1f       c7aad43836fa5                                                                                         2 minutes ago        Running             kube-controller-manager   2                   66419f8520fde       kube-controller-manager-ha-671000
	0faa6b8c33ebd       c42f13656d0b2                                                                                         2 minutes ago        Running             kube-apiserver            1                   70fab261c2b17       kube-apiserver-ha-671000
	0c29a1524fb04       22aaebb38f4a9                                                                                         2 minutes ago        Running             kube-vip                  0                   2c44ab6fb1b45       kube-vip-ha-671000
	d51ddba3901bd       c7aad43836fa5                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   66419f8520fde       kube-controller-manager-ha-671000
	06468c7f97645       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      1                   7eb485f57bef9       etcd-ha-671000
	09b069cddaf09       259c8277fcbbc                                                                                         2 minutes ago        Running             kube-scheduler            1                   0b3f9b67d960c       kube-scheduler-ha-671000
	d08c19fcd330c       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   0a3a1177976eb       busybox-fc5497c4f-lfn9v
	aa3ff28b7c901       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   803b42dbd6068       coredns-7db6d8ff4d-kjf54
	bfe23d4afc231       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   26bf6869329a0       coredns-7db6d8ff4d-hqtd2
	1a1434eaae36d       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              8 minutes ago        Exited              kindnet-cni               0                   61be6d7331d2d       kindnet-zvz9x
	2de2ad908033c       a0bf559e280cf                                                                                         8 minutes ago        Exited              kube-proxy                0                   ce98653ecf0b5       kube-proxy-kppdj
	5254e6584697c       3861cfcd7c04c                                                                                         8 minutes ago        Exited              etcd                      0                   6c18606ff8a34       etcd-ha-671000
	52585f49ef66d       c42f13656d0b2                                                                                         8 minutes ago        Exited              kube-apiserver            0                   157e6496c96d6       kube-apiserver-ha-671000
	0f13fc419c3a3       259c8277fcbbc                                                                                         8 minutes ago        Exited              kube-scheduler            0                   20d7fc1ca35c2       kube-scheduler-ha-671000
	
	
	==> coredns [4e72d733bb17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60404 - 16395 "HINFO IN 7673949606304789129.6924752665992071371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01220844s
	
	
	==> coredns [a5ba9a7a24b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54698 - 36003 "HINFO IN 1073736587953336830.7574535335510144074. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015279179s
	
	
	==> coredns [aa3ff28b7c90] <==
	[INFO] 10.244.0.4:55179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00060962s
	[INFO] 10.244.0.4:54761 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000032941s
	[INFO] 10.244.0.4:53596 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034902s
	[INFO] 10.244.1.2:52057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008017s
	[INFO] 10.244.1.2:37246 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000039116s
	[INFO] 10.244.1.2:41412 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000078072s
	[INFO] 10.244.1.2:35969 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042719s
	[INFO] 10.244.1.2:60012 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.000495345s
	[INFO] 10.244.1.2:57444 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000068087s
	[INFO] 10.244.1.2:56681 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071523s
	[INFO] 10.244.1.2:51095 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038807s
	[INFO] 10.244.2.2:39666 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012061s
	[INFO] 10.244.0.4:36229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075354s
	[INFO] 10.244.0.4:36052 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000059981s
	[INFO] 10.244.0.4:45966 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005648s
	[INFO] 10.244.0.4:40793 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010383s
	[INFO] 10.244.1.2:39020 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075539s
	[INFO] 10.244.1.2:57719 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064383s
	[INFO] 10.244.2.2:46470 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097542s
	[INFO] 10.244.2.2:54394 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123552s
	[INFO] 10.244.2.2:60319 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000056346s
	[INFO] 10.244.1.2:32801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087202s
	[INFO] 10.244.1.2:39594 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089023s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bfe23d4afc23] <==
	[INFO] 10.244.2.2:60822 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010749854s
	[INFO] 10.244.0.4:46715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116633s
	[INFO] 10.244.0.4:36578 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000057682s
	[INFO] 10.244.2.2:49239 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011646073s
	[INFO] 10.244.2.2:60414 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097s
	[INFO] 10.244.2.2:33426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011533001s
	[INFO] 10.244.2.2:51459 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091142s
	[INFO] 10.244.0.4:52044 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037728s
	[INFO] 10.244.0.4:58536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000026924s
	[INFO] 10.244.0.4:60528 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030891s
	[INFO] 10.244.0.4:46083 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057358s
	[INFO] 10.244.2.2:35752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076258s
	[INFO] 10.244.2.2:52942 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063141s
	[INFO] 10.244.2.2:37055 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096791s
	[INFO] 10.244.1.2:52668 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00008334s
	[INFO] 10.244.1.2:39089 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160813s
	[INFO] 10.244.2.2:59653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000092778s
	[INFO] 10.244.0.4:35085 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00007747s
	[INFO] 10.244.0.4:32964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073391s
	[INFO] 10.244.0.4:44760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077879s
	[INFO] 10.244.0.4:37758 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071268s
	[INFO] 10.244.1.2:55625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000061815s
	[INFO] 10.244.1.2:50908 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000064514s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-671000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T14_15_29_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:15:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:24:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:21:46 +0000   Sun, 05 May 2024 21:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.51
	  Hostname:    ha-671000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 3721a595f38c41b8bbd3cdb36f05098b
	  System UUID:                93894e2d-0000-0000-8cc9-aa1a138ddf96
	  Boot ID:                    844f38c6-034c-4659-bd02-e667c7e866d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lfn9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 coredns-7db6d8ff4d-hqtd2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 coredns-7db6d8ff4d-kjf54             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m20s
	  kube-system                 etcd-ha-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m35s
	  kube-system                 kindnet-zvz9x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m20s
	  kube-system                 kube-apiserver-ha-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-controller-manager-ha-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-proxy-kppdj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-ha-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-vip-ha-671000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  Starting                 8m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m40s (x8 over 8m40s)  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m40s (x7 over 8m40s)  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m40s (x8 over 8m40s)  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m33s                  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m33s                  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s                  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           8m21s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  NodeReady                8m11s                  kubelet          Node ha-671000 status is now: NodeReady
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  Starting                 2m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m51s (x8 over 2m51s)  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m51s (x8 over 2m51s)  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m51s (x7 over 2m51s)  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           2m1s                   node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	
	
	Name:               ha-671000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_16_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:23:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:21:38 +0000   Sun, 05 May 2024 21:16:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.52
	  Hostname:    ha-671000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd0c52403e6948f895e68f7307e07d3c
	  System UUID:                294b4d68-0000-0000-b3f3-54381951a5e8
	  Boot ID:                    afe03ef7-7b17-481f-b318-67efdc00c911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q27t4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-ha-671000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m22s
	  kube-system                 kindnet-kn94d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m24s
	  kube-system                 kube-apiserver-ha-671000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-controller-manager-ha-671000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-proxy-5jwqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m24s
	  kube-system                 kube-scheduler-ha-671000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-vip-ha-671000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m20s                  kube-proxy       
	  Normal   Starting                 2m6s                   kube-proxy       
	  Normal   Starting                 3m55s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m24s (x8 over 7m24s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m24s (x8 over 7m24s)  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m24s (x7 over 7m24s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           7m7s                   node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           5m57s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m58s                  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m58s                  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m58s                  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m58s                  kubelet          Node ha-671000-m02 has been rebooted, boot id: 4c58d033-04b8-4c15-8fdc-920ae431b3e3
	  Normal   Starting                 3m58s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m42s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   Starting                 2m33s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m32s (x7 over 2m32s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m11s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           2m1s                   node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	
	
	Name:               ha-671000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_18_38_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:20:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 05 May 2024 21:19:15 +0000   Sun, 05 May 2024 21:22:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.54
	  Hostname:    ha-671000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 d4981d8834c947ca92647a836bff839f
	  System UUID:                8d0f44c8-0000-0000-aaa8-77d77d483dce
	  Boot ID:                    16c48acc-c76d-4b03-8b93-c113a1acb125
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ffg2p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-proxy-b45s6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m22s (x2 over 5m22s)  kubelet          Node ha-671000-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m22s (x2 over 5m22s)  kubelet          Node ha-671000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x2 over 5m22s)  kubelet          Node ha-671000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  NodeReady                4m45s                  kubelet          Node ha-671000-m04 status is now: NodeReady
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  RegisteredNode           2m1s                   node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal  NodeNotReady             91s                    node-controller  Node ha-671000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.036177] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007984] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.371215] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006679] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.612826] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[May 5 21:21] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +2.610406] systemd-fstab-generator[479]: Ignoring "noauto" option for root device
	[  +0.095617] systemd-fstab-generator[491]: Ignoring "noauto" option for root device
	[  +1.314538] kauditd_printk_skb: 42 callbacks suppressed
	[  +0.655682] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.256796] systemd-fstab-generator[1096]: Ignoring "noauto" option for root device
	[  +0.100506] systemd-fstab-generator[1108]: Ignoring "noauto" option for root device
	[  +0.111570] systemd-fstab-generator[1122]: Ignoring "noauto" option for root device
	[  +2.444375] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.102765] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.091262] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.136792] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.441863] systemd-fstab-generator[1481]: Ignoring "noauto" option for root device
	[  +6.939646] kauditd_printk_skb: 276 callbacks suppressed
	[ +21.981272] kauditd_printk_skb: 40 callbacks suppressed
	[May 5 21:22] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.342141] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [06468c7f9764] <==
	{"level":"warn","ts":"2024-05-05T21:23:31.592823Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:34.152847Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:34.152977Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:36.59348Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:36.593489Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:38.154487Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:38.154534Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:41.594251Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:41.59428Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:42.155735Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:42.155924Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.158028Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.158078Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.594975Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:46.595025Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:50.158941Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:50.158991Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:51.5957Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:51.595712Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:54.16042Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:54.160503Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:56.595961Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:56.59603Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c5e392ded2f33250","rtt":"0s","error":"dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:58.161883Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.53:2380/version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-05-05T21:23:58.162Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"c5e392ded2f33250","error":"Get \"https://192.169.0.53:2380/version\": dial tcp 192.169.0.53:2380: connect: connection refused"}
	
	
	==> etcd [5254e6584697] <==
	{"level":"warn","ts":"2024-05-05T21:20:41.244715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.517168037s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"","error":"context canceled"}
	2024/05/05 21:20:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-05T21:20:41.244728Z","caller":"traceutil/trace.go:171","msg":"trace[1070592193] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; }","duration":"7.517242865s","start":"2024-05-05T21:20:33.727481Z","end":"2024-05-05T21:20:41.244724Z","steps":["trace[1070592193] 'agreement among raft nodes before linearized reading'  (duration: 7.517229047s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:20:41.244739Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:20:33.727472Z","time spent":"7.517264459s","remote":"127.0.0.1:52468","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	2024/05/05 21:20:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:20:41.318319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:20:41.318441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:20:41.318529Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1792221d12ca7193","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:20:41.318575Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318613Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318632Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318702Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318726Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318811Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318844Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:20:41.318852Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.318878Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.318893Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319101Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319165Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319193Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.319239Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:20:41.320696Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.51:2380"}
	{"level":"info","ts":"2024-05-05T21:20:41.320808Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.51:2380"}
	{"level":"info","ts":"2024-05-05T21:20:41.320835Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-671000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.51:2380"],"advertise-client-urls":["https://192.169.0.51:2379"]}
	
	
	==> kernel <==
	 21:24:01 up 3 min,  0 users,  load average: 0.30, 0.28, 0.11
	Linux ha-671000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1a1434eaae36] <==
	I0505 21:19:55.731657       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:20:05.736429       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:20:05.736525       1 main.go:227] handling current node
	I0505 21:20:05.736552       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:20:05.736689       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:20:05.736923       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:20:05.736977       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:20:05.737155       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:20:05.737283       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:20:15.745695       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:20:15.745995       1 main.go:227] handling current node
	I0505 21:20:15.746046       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:20:15.746126       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:20:15.746307       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:20:15.746355       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:20:15.746485       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:20:15.746532       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:20:25.759299       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:20:25.759513       1 main.go:227] handling current node
	I0505 21:20:25.759563       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:20:25.759608       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:20:25.759700       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:20:25.759814       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:20:25.759945       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:20:25.759992       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c048dc81e639] <==
	I0505 21:23:30.619254       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:30.619356       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:23:30.619383       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:40.633008       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:23:40.633100       1 main.go:227] handling current node
	I0505 21:23:40.633177       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:23:40.633333       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:40.633697       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:23:40.633810       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:40.634043       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:23:40.634273       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:23:50.639283       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:23:50.639422       1 main.go:227] handling current node
	I0505 21:23:50.639547       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:23:50.639598       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:23:50.639814       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0505 21:23:50.639865       1 main.go:250] Node ha-671000-m03 has CIDR [10.244.2.0/24] 
	I0505 21:23:50.640000       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:23:50.640050       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:24:00.652201       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:24:00.652236       1 main.go:227] handling current node
	I0505 21:24:00.652244       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:24:00.652248       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:24:00.652327       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:24:00.652354       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0faa6b8c33eb] <==
	I0505 21:21:37.291123       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:21:37.291359       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:21:37.274777       1 aggregator.go:163] waiting for initial CRD sync...
	I0505 21:21:37.375644       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:21:37.375925       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:21:37.375971       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:21:37.377200       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:21:37.378817       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:21:37.381581       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:21:37.377409       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:21:37.381892       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0505 21:21:37.382046       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:21:37.382198       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:21:37.382286       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:21:37.382435       1 cache.go:39] Caches are synced for autoregister controller
	W0505 21:21:37.393655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
	I0505 21:21:37.416822       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:21:37.416834       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:21:37.417065       1 policy_source.go:224] refreshing policies
	I0505 21:21:37.456433       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:21:37.495739       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:21:37.501072       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0505 21:21:37.503150       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0505 21:21:38.282464       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:21:38.614946       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.51 192.169.0.52 192.169.0.53]
	
	
	==> kube-apiserver [52585f49ef66] <==
	W0505 21:20:41.280549       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280601       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280629       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280682       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280709       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280761       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280789       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280843       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280871       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280923       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.280951       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.281002       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.281029       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.281054       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0505 21:20:41.281265       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 1.492664ms, panicked: false, err: rpc error: code = Unknown desc = malformed header: missing HTTP content-type, panic-reason: <nil>
	W0505 21:20:41.284566       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.284618       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.284660       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.284759       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285529       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285564       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285594       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:20:41.285900       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0505 21:20:41.286124       1 timeout.go:142] post-timeout activity - time-elapsed: 149.222533ms, GET "/readyz" result: <nil>
	I0505 21:20:41.286844       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-controller-manager [64c952108db1] <==
	I0505 21:22:00.453531       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:22:05.511091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.295484ms"
	I0505 21:22:05.511370       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.644µs"
	I0505 21:22:21.210161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47µs"
	I0505 21:22:22.203561       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.395µs"
	I0505 21:22:29.671409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.983559ms"
	I0505 21:22:29.671803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="344.603µs"
	I0505 21:22:34.895317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.354µs"
	I0505 21:22:34.945918       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qfwk6\": the object has been modified; please apply your changes to the latest version and try again"
	I0505 21:22:34.946345       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bea99034-e1b7-4a88-8a06-fbc74abeaaf9", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qfwk6": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:22:34.949671       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.865154ms"
	I0505 21:22:34.950019       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.905µs"
	I0505 21:22:36.927342       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="78.051µs"
	I0505 21:22:36.944792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.116942ms"
	I0505 21:22:36.945091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.255µs"
	I0505 21:23:50.451591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.67394ms"
	I0505 21:23:50.507292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.495393ms"
	I0505 21:23:50.514293       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.956564ms"
	I0505 21:23:50.514641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.293µs"
	E0505 21:23:59.870289       1 gc_controller.go:153] "Failed to get node" err="node \"ha-671000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671000-m03"
	E0505 21:23:59.870413       1 gc_controller.go:153] "Failed to get node" err="node \"ha-671000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671000-m03"
	E0505 21:23:59.870433       1 gc_controller.go:153] "Failed to get node" err="node \"ha-671000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671000-m03"
	E0505 21:23:59.870445       1 gc_controller.go:153] "Failed to get node" err="node \"ha-671000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671000-m03"
	E0505 21:23:59.870456       1 gc_controller.go:153] "Failed to get node" err="node \"ha-671000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671000-m03"
	E0505 21:23:59.870466       1 gc_controller.go:153] "Failed to get node" err="node \"ha-671000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-671000-m03"
	
	
	==> kube-controller-manager [d51ddba3901b] <==
	I0505 21:21:17.233998       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:21:17.699254       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:21:17.699295       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:21:17.702300       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0505 21:21:17.704596       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:21:17.704681       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:21:17.704829       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0505 21:21:37.707829       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-n
amespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-proxy [2de2ad908033] <==
	I0505 21:15:42.197467       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:15:42.206342       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
	I0505 21:15:42.233495       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:15:42.233528       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:15:42.233540       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:15:42.235848       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:15:42.236234       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:15:42.236321       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:15:42.237244       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:15:42.237489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:15:42.237528       1 config.go:192] "Starting service config controller"
	I0505 21:15:42.237533       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:15:42.237620       1 config.go:319] "Starting node config controller"
	I0505 21:15:42.237748       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:15:42.338371       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:15:42.338453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:15:42.338567       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7001a9c78d0a] <==
	I0505 21:22:05.427749       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:22:05.441644       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
	I0505 21:22:05.545461       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:22:05.545682       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:22:05.545778       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:22:05.548756       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:22:05.549189       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:22:05.549278       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:22:05.551545       1 config.go:192] "Starting service config controller"
	I0505 21:22:05.551674       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:22:05.551761       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:22:05.551848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:22:05.552969       1 config.go:319] "Starting node config controller"
	I0505 21:22:05.553109       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:22:05.652764       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:22:05.652801       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:22:05.653231       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09b069cddaf0] <==
	I0505 21:21:17.140666       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:21:27.959721       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.51:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0505 21:21:27.959770       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:21:27.959776       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:21:37.325220       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:21:37.325291       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:21:37.336314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:21:37.337352       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:21:37.337505       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:21:37.341283       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:21:37.438307       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [0f13fc419c3a] <==
	I0505 21:18:38.425370       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ffg2p" node="ha-671000-m04"
	E0505 21:18:38.428127       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tgdtz\": pod kube-proxy-tgdtz is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tgdtz" node="ha-671000-m04"
	E0505 21:18:38.428397       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f5f9b9e4-4771-49af-a1e4-37910d8267a4(kube-system/kube-proxy-tgdtz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-tgdtz"
	E0505 21:18:38.428585       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tgdtz\": pod kube-proxy-tgdtz is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-tgdtz"
	I0505 21:18:38.428695       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tgdtz" node="ha-671000-m04"
	E0505 21:18:38.442949       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-66l5l\": pod kindnet-66l5l is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-66l5l" node="ha-671000-m04"
	E0505 21:18:38.443283       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 4f688ff7-efff-4775-9a88-d954e81852f5(kube-system/kindnet-66l5l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-66l5l"
	E0505 21:18:38.443527       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-66l5l\": pod kindnet-66l5l is already assigned to node \"ha-671000-m04\"" pod="kube-system/kindnet-66l5l"
	I0505 21:18:38.443685       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-66l5l" node="ha-671000-m04"
	E0505 21:18:38.443578       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-xvf68\": pod kube-proxy-xvf68 is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-xvf68" node="ha-671000-m04"
	E0505 21:18:38.444183       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 24a52ab7-73e5-4d91-810b-a2260dae577f(kube-system/kube-proxy-xvf68) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-xvf68"
	E0505 21:18:38.444289       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-xvf68\": pod kube-proxy-xvf68 is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-xvf68"
	I0505 21:18:38.444408       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-xvf68" node="ha-671000-m04"
	E0505 21:18:38.489548       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sbspd\": pod kindnet-sbspd is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sbspd" node="ha-671000-m04"
	E0505 21:18:38.489803       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod afb510c4-ddf4-4844-bdf5-80343510ecb8(kube-system/kindnet-sbspd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sbspd"
	E0505 21:18:38.490102       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sbspd\": pod kindnet-sbspd is already assigned to node \"ha-671000-m04\"" pod="kube-system/kindnet-sbspd"
	I0505 21:18:38.490296       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sbspd" node="ha-671000-m04"
	E0505 21:18:38.499960       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rldf7\": pod kube-proxy-rldf7 is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rldf7" node="ha-671000-m04"
	E0505 21:18:38.500590       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f733f40c-9915-44e5-8f24-9f4101633739(kube-system/kube-proxy-rldf7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rldf7"
	E0505 21:18:38.501561       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rldf7\": pod kube-proxy-rldf7 is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-rldf7"
	I0505 21:18:38.501767       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rldf7" node="ha-671000-m04"
	E0505 21:18:40.483901       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fntvj\": pod kube-proxy-fntvj is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fntvj" node="ha-671000-m04"
	E0505 21:18:40.483990       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fntvj\": pod kube-proxy-fntvj is already assigned to node \"ha-671000-m04\"" pod="kube-system/kube-proxy-fntvj"
	I0505 21:18:40.484875       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fntvj" node="ha-671000-m04"
	E0505 21:20:41.266642       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 05 21:22:21 ha-671000 kubelet[1488]: E0505 21:22:21.192254    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-kjf54_kube-system(c780145e-9d82-4451-94e8-dee09a63eadb)\"" pod="kube-system/coredns-7db6d8ff4d-kjf54" podUID="c780145e-9d82-4451-94e8-dee09a63eadb"
	May 05 21:22:22 ha-671000 kubelet[1488]: I0505 21:22:22.192271    1488 scope.go:117] "RemoveContainer" containerID="bfe23d4afc2313a26ae10b34970e899d74fe1e0f1c01bf9df2058c578bac6bf1"
	May 05 21:22:22 ha-671000 kubelet[1488]: E0505 21:22:22.192522    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=coredns pod=coredns-7db6d8ff4d-hqtd2_kube-system(e76b43f2-8189-4e5d-adc3-ced554e9ee07)\"" pod="kube-system/coredns-7db6d8ff4d-hqtd2" podUID="e76b43f2-8189-4e5d-adc3-ced554e9ee07"
	May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.191629    1488 scope.go:117] "RemoveContainer" containerID="aa3ff28b7c9017843d8d888a429ee706bd6460febccb79e8787320e99efbdfa4"
	May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.865379    1488 scope.go:117] "RemoveContainer" containerID="797ed8f77f01f6ba02573542d48c7a31705a8fe5b3efed78400f7de2a56d9358"
	May 05 21:22:34 ha-671000 kubelet[1488]: I0505 21:22:34.865674    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:22:34 ha-671000 kubelet[1488]: E0505 21:22:34.865777    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:22:36 ha-671000 kubelet[1488]: I0505 21:22:36.192222    1488 scope.go:117] "RemoveContainer" containerID="bfe23d4afc2313a26ae10b34970e899d74fe1e0f1c01bf9df2058c578bac6bf1"
	May 05 21:22:49 ha-671000 kubelet[1488]: I0505 21:22:49.192583    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:22:49 ha-671000 kubelet[1488]: E0505 21:22:49.193087    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:02 ha-671000 kubelet[1488]: I0505 21:23:02.191713    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:02 ha-671000 kubelet[1488]: E0505 21:23:02.192199    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:09 ha-671000 kubelet[1488]: E0505 21:23:09.208918    1488 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:23:09 ha-671000 kubelet[1488]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:23:09 ha-671000 kubelet[1488]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:23:09 ha-671000 kubelet[1488]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:23:09 ha-671000 kubelet[1488]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:23:14 ha-671000 kubelet[1488]: I0505 21:23:14.191788    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:14 ha-671000 kubelet[1488]: E0505 21:23:14.192304    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:29 ha-671000 kubelet[1488]: I0505 21:23:29.193869    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:29 ha-671000 kubelet[1488]: E0505 21:23:29.194441    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:40 ha-671000 kubelet[1488]: I0505 21:23:40.191896    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:40 ha-671000 kubelet[1488]: E0505 21:23:40.192265    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	May 05 21:23:54 ha-671000 kubelet[1488]: I0505 21:23:54.192017    1488 scope.go:117] "RemoveContainer" containerID="0883553982a241f488903e055233ed6a4dfbe67c9c169cefdef804a82cfba377"
	May 05 21:23:54 ha-671000 kubelet[1488]: E0505 21:23:54.192461    1488 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f376315c-5f9b-46f4-b295-6d7d025063bc)\"" pod="kube-system/storage-provisioner" podUID="f376315c-5f9b-46f4-b295-6d7d025063bc"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-671000 -n ha-671000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-671000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-zc2ns
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-671000 describe pod busybox-fc5497c4f-zc2ns
helpers_test.go:282: (dbg) kubectl --context ha-671000 describe pod busybox-fc5497c4f-zc2ns:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-zc2ns
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fr5s9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-fr5s9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (13.25s)
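The describe output above explains why busybox-fc5497c4f-zc2ns is the one non-running pod flagged in the post-mortem: the scheduler reports that, of the four nodes, one carries an untolerated node.kubernetes.io/unreachable taint, one is unschedulable, and the remaining two fail the pod anti-affinity check (presumably because each already runs a busybox replica), so the pod stays Pending. A minimal way to confirm this against the same profile, assuming the ha-671000 cluster and its kubeconfig context are still available, would be:

	kubectl --context ha-671000 get nodes -o wide
	kubectl --context ha-671000 get pods -l app=busybox -o wide
	kubectl --context ha-671000 describe pod busybox-fc5497c4f-zc2ns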

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (427.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-671000 --control-plane -v=7 --alsologtostderr
E0505 14:27:31.481073   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:28:23.838550   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:32:31.477812   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:33:23.836045   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-671000 --control-plane -v=7 --alsologtostderr: exit status 80 (7m3.315430083s)

                                                
                                                
-- stdout --
	* Adding node m05 to cluster ha-671000 as [worker control-plane]
	* Starting "ha-671000-m05" control-plane node in "ha-671000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:27:22.525702   56490 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:27:22.525904   56490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:27:22.525909   56490 out.go:304] Setting ErrFile to fd 2...
	I0505 14:27:22.525913   56490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:27:22.526085   56490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:27:22.526437   56490 mustload.go:65] Loading cluster: ha-671000
	I0505 14:27:22.526750   56490 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:27:22.527081   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:27:22.527129   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:27:22.535336   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58201
	I0505 14:27:22.535750   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:27:22.536170   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:27:22.536179   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:27:22.536391   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:27:22.536493   56490 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:27:22.536580   56490 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:22.536646   56490 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56435
	I0505 14:27:22.537610   56490 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:27:22.537847   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:27:22.537866   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:27:22.546330   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58203
	I0505 14:27:22.546653   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:27:22.546975   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:27:22.546983   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:27:22.547221   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:27:22.547332   56490 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:27:22.547654   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:27:22.547675   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:27:22.556201   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58205
	I0505 14:27:22.556560   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:27:22.556882   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:27:22.556893   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:27:22.557144   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:27:22.557260   56490 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:27:22.557350   56490 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:22.557427   56490 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56442
	I0505 14:27:22.558391   56490 host.go:66] Checking if "ha-671000-m02" exists ...
	I0505 14:27:22.558640   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:27:22.558661   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:27:22.567087   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58207
	I0505 14:27:22.567423   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:27:22.567774   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:27:22.567792   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:27:22.567979   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:27:22.568102   56490 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:27:22.568199   56490 api_server.go:166] Checking apiserver status ...
	I0505 14:27:22.568260   56490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:27:22.568278   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:27:22.568382   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:27:22.568458   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:27:22.568555   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:27:22.568634   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:27:22.607493   56490 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2033/cgroup
	W0505 14:27:22.616282   56490 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2033/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:27:22.616335   56490 ssh_runner.go:195] Run: ls
	I0505 14:27:22.619368   56490 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
	I0505 14:27:22.623531   56490 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
	ok
	I0505 14:27:22.645212   56490 out.go:177] * Adding node m05 to cluster ha-671000 as [worker control-plane]
	I0505 14:27:22.666055   56490 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:27:22.666148   56490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:27:22.687951   56490 out.go:177] * Starting "ha-671000-m05" control-plane node in "ha-671000" cluster
	I0505 14:27:22.708901   56490 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:27:22.708946   56490 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 14:27:22.708964   56490 cache.go:56] Caching tarball of preloaded images
	I0505 14:27:22.709091   56490 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:27:22.709100   56490 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:27:22.709172   56490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:27:22.709697   56490 start.go:360] acquireMachinesLock for ha-671000-m05: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:27:22.709772   56490 start.go:364] duration metric: took 58.774µs to acquireMachinesLock for "ha-671000-m05"
	I0505 14:27:22.709792   56490 start.go:93] Provisioning new machine with config: &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m05 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:fa
lse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m05 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}
	I0505 14:27:22.709901   56490 start.go:125] createHost starting for "m05" (driver="hyperkit")
	I0505 14:27:22.730811   56490 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0505 14:27:22.730959   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:27:22.730984   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:27:22.739576   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58211
	I0505 14:27:22.739923   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:27:22.740343   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:27:22.740360   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:27:22.740583   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:27:22.740692   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetMachineName
	I0505 14:27:22.740786   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:22.740894   56490 start.go:159] libmachine.API.Create for "ha-671000" (driver="hyperkit")
	I0505 14:27:22.740918   56490 client.go:168] LocalClient.Create starting
	I0505 14:27:22.740951   56490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem
	I0505 14:27:22.741007   56490 main.go:141] libmachine: Decoding PEM data...
	I0505 14:27:22.741021   56490 main.go:141] libmachine: Parsing certificate...
	I0505 14:27:22.741070   56490 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem
	I0505 14:27:22.741108   56490 main.go:141] libmachine: Decoding PEM data...
	I0505 14:27:22.741121   56490 main.go:141] libmachine: Parsing certificate...
	I0505 14:27:22.741143   56490 main.go:141] libmachine: Running pre-create checks...
	I0505 14:27:22.741152   56490 main.go:141] libmachine: (ha-671000-m05) Calling .PreCreateCheck
	I0505 14:27:22.741228   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:22.741281   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetConfigRaw
	I0505 14:27:22.741737   56490 main.go:141] libmachine: Creating machine...
	I0505 14:27:22.741746   56490 main.go:141] libmachine: (ha-671000-m05) Calling .Create
	I0505 14:27:22.741811   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:22.741933   56490 main.go:141] libmachine: (ha-671000-m05) DBG | I0505 14:27:22.741808   56497 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:27:22.741991   56490 main.go:141] libmachine: (ha-671000-m05) Downloading /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-53665/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 14:27:22.969131   56490 main.go:141] libmachine: (ha-671000-m05) DBG | I0505 14:27:22.969070   56497 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/id_rsa...
	I0505 14:27:23.339945   56490 main.go:141] libmachine: (ha-671000-m05) DBG | I0505 14:27:23.339878   56497 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/ha-671000-m05.rawdisk...
	I0505 14:27:23.339966   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Writing magic tar header
	I0505 14:27:23.339977   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Writing SSH key tar header
	I0505 14:27:23.340315   56490 main.go:141] libmachine: (ha-671000-m05) DBG | I0505 14:27:23.340275   56497 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05 ...
	I0505 14:27:23.701329   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:23.701350   56490 main.go:141] libmachine: (ha-671000-m05) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/hyperkit.pid
	I0505 14:27:23.701396   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Using UUID eeb50b58-13a3-4e5d-8b0f-0aff3f309a5b
	I0505 14:27:23.727609   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Generated MAC f6:52:28:c9:d6:ef
	I0505 14:27:23.727626   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:27:23.727656   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eeb50b58-13a3-4e5d-8b0f-0aff3f309a5b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:27:23.727685   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"eeb50b58-13a3-4e5d-8b0f-0aff3f309a5b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:27:23.727731   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "eeb50b58-13a3-4e5d-8b0f-0aff3f309a5b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/ha-671000-m05.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/
machines/ha-671000-m05/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:27:23.727770   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U eeb50b58-13a3-4e5d-8b0f-0aff3f309a5b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/ha-671000-m05.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:27:23.727790   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:27:23.730691   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 DEBUG: hyperkit: Pid is 56500
	I0505 14:27:23.731131   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Attempt 0
	I0505 14:27:23.731161   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:23.731216   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:23.732171   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Searching for f6:52:28:c9:d6:ef in /var/db/dhcpd_leases ...
	I0505 14:27:23.732289   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:27:23.732315   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 14:27:23.732338   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 14:27:23.732360   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:27:23.732374   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:27:23.732387   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 14:27:23.732401   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 14:27:23.732411   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 14:27:23.732424   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 14:27:23.732440   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 14:27:23.732451   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 14:27:23.732468   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 14:27:23.732482   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 14:27:23.732504   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 14:27:23.732524   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 14:27:23.732537   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 14:27:23.732549   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 14:27:23.732560   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 14:27:23.732569   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 14:27:23.732580   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 14:27:23.732588   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 14:27:23.732596   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 14:27:23.732604   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 14:27:23.732611   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 14:27:23.732618   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 14:27:23.732626   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 14:27:23.732635   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 14:27:23.732645   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 14:27:23.732652   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 14:27:23.732665   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 14:27:23.732675   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 14:27:23.732684   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 14:27:23.732693   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 14:27:23.732703   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 14:27:23.732713   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 14:27:23.732721   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 14:27:23.732730   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 14:27:23.732737   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 14:27:23.732744   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 14:27:23.732761   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 14:27:23.732770   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 14:27:23.732783   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 14:27:23.732794   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 14:27:23.732803   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 14:27:23.732810   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 14:27:23.732817   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 14:27:23.732825   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 14:27:23.732837   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 14:27:23.732849   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 14:27:23.732862   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 14:27:23.732875   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 14:27:23.732886   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 14:27:23.732897   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 14:27:23.732909   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 14:27:23.738775   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:27:23.747137   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:27:23.747893   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:27:23.747917   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:27:23.747940   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:27:23.747970   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:27:24.135410   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:27:24.135426   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:27:24.250129   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:27:24.250149   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:27:24.250157   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:27:24.250164   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:27:24.250984   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:27:24.250994   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:27:25.734186   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Attempt 1
	I0505 14:27:25.734202   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:25.734315   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:25.735117   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Searching for f6:52:28:c9:d6:ef in /var/db/dhcpd_leases ...
	I0505 14:27:25.735225   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:27:25.735235   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 14:27:25.735262   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 14:27:25.735279   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:27:25.735291   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:27:25.735304   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 14:27:25.735322   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 14:27:25.735331   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 14:27:25.735338   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 14:27:25.735344   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 14:27:25.735356   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 14:27:25.735363   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 14:27:25.735369   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 14:27:25.735376   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 14:27:25.735385   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 14:27:25.735395   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 14:27:25.735403   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 14:27:25.735411   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 14:27:25.735418   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 14:27:25.735424   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 14:27:25.735432   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 14:27:25.735445   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 14:27:25.735456   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 14:27:25.735464   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 14:27:25.735472   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 14:27:25.735526   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 14:27:25.735556   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 14:27:25.735566   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 14:27:25.735572   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 14:27:25.735578   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 14:27:25.735585   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 14:27:25.735591   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 14:27:25.735603   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 14:27:25.735620   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 14:27:25.735634   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 14:27:25.735653   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 14:27:25.735661   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 14:27:25.735668   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 14:27:25.735676   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 14:27:25.735685   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 14:27:25.735694   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 14:27:25.735702   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 14:27:25.735709   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 14:27:25.735719   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 14:27:25.735726   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 14:27:25.735734   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 14:27:25.735741   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 14:27:25.735749   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 14:27:25.735758   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 14:27:25.735764   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 14:27:25.735772   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 14:27:25.735782   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 14:27:25.735790   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 14:27:25.735798   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 14:27:27.736075   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Attempt 2
	I0505 14:27:27.736092   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:27.736179   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:27.737078   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Searching for f6:52:28:c9:d6:ef in /var/db/dhcpd_leases ...
	I0505 14:27:27.737157   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:27:27.737168   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 14:27:27.737186   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 14:27:27.737195   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:27:27.737232   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:27:27.737246   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 14:27:27.737258   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 14:27:27.737266   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 14:27:27.737272   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 14:27:27.737281   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 14:27:27.737290   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 14:27:27.737298   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 14:27:27.737307   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 14:27:27.737314   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 14:27:27.737321   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 14:27:27.737327   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 14:27:27.737334   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 14:27:27.737344   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 14:27:27.737351   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 14:27:27.737359   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 14:27:27.737372   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 14:27:27.737390   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 14:27:27.737401   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 14:27:27.737411   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 14:27:27.737418   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 14:27:27.737426   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 14:27:27.737433   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 14:27:27.737440   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 14:27:27.737447   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 14:27:27.737455   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 14:27:27.737462   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 14:27:27.737469   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 14:27:27.737476   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 14:27:27.737493   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 14:27:27.737500   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 14:27:27.737507   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 14:27:27.737514   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 14:27:27.737521   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 14:27:27.737529   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 14:27:27.737537   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 14:27:27.737546   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 14:27:27.737554   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 14:27:27.737561   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 14:27:27.737566   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 14:27:27.737579   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 14:27:27.737585   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 14:27:27.737590   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 14:27:27.737597   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 14:27:27.737603   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 14:27:27.737611   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 14:27:27.737617   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 14:27:27.737625   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 14:27:27.737632   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 14:27:27.737639   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 14:27:29.561119   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:27:29.561191   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:27:29.561202   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:27:29.584580   56490 main.go:141] libmachine: (ha-671000-m05) DBG | 2024/05/05 14:27:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:27:29.737526   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Attempt 3
	I0505 14:27:29.737558   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:29.737741   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:29.739052   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Searching for f6:52:28:c9:d6:ef in /var/db/dhcpd_leases ...
	I0505 14:27:29.739168   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:27:29.739186   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 14:27:29.739199   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 14:27:29.739209   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:27:29.739217   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:27:29.739231   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 14:27:29.739274   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 14:27:29.739289   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 14:27:29.739316   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 14:27:29.739331   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 14:27:29.739355   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 14:27:29.739387   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 14:27:29.739407   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 14:27:29.739422   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 14:27:29.739433   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 14:27:29.739443   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 14:27:29.739453   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 14:27:29.739486   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 14:27:29.739503   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 14:27:29.739521   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 14:27:29.739539   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 14:27:29.739552   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 14:27:29.739580   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 14:27:29.739603   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 14:27:29.739627   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 14:27:29.739655   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 14:27:29.739668   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 14:27:29.739684   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 14:27:29.739701   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 14:27:29.739711   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 14:27:29.739721   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 14:27:29.739740   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 14:27:29.739756   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 14:27:29.739766   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 14:27:29.739777   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 14:27:29.739786   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 14:27:29.739796   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 14:27:29.739806   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 14:27:29.739816   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 14:27:29.739825   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 14:27:29.739836   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 14:27:29.739845   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 14:27:29.739857   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 14:27:29.739876   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 14:27:29.739893   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 14:27:29.739906   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 14:27:29.739916   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 14:27:29.739925   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 14:27:29.739934   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 14:27:29.739948   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 14:27:29.739966   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 14:27:29.739979   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 14:27:29.739991   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 14:27:29.740002   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 14:27:31.739617   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Attempt 4
	I0505 14:27:31.739637   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:31.739720   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:31.740514   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Searching for f6:52:28:c9:d6:ef in /var/db/dhcpd_leases ...
	I0505 14:27:31.740606   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:27:31.740617   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 14:27:31.740626   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 14:27:31.740632   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:27:31.740639   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:27:31.740645   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 14:27:31.740665   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 14:27:31.740676   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 14:27:31.740683   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 14:27:31.740700   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 14:27:31.740717   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 14:27:31.740727   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 14:27:31.740744   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 14:27:31.740752   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 14:27:31.740763   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 14:27:31.740770   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 14:27:31.740780   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 14:27:31.740788   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 14:27:31.740798   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 14:27:31.740806   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 14:27:31.740813   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 14:27:31.740820   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 14:27:31.740827   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 14:27:31.740834   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 14:27:31.740841   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 14:27:31.740847   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 14:27:31.740854   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 14:27:31.740860   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 14:27:31.740866   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 14:27:31.740874   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 14:27:31.740886   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 14:27:31.740898   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 14:27:31.740907   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 14:27:31.740915   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 14:27:31.740922   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 14:27:31.740928   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 14:27:31.740946   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 14:27:31.740955   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 14:27:31.740964   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 14:27:31.740972   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 14:27:31.740979   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 14:27:31.740986   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 14:27:31.741001   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 14:27:31.741015   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 14:27:31.741023   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 14:27:31.741035   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 14:27:31.741045   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 14:27:31.741052   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 14:27:31.741059   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 14:27:31.741066   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 14:27:31.741073   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 14:27:31.741082   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 14:27:31.741091   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 14:27:31.741098   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 14:27:33.742113   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Attempt 5
	I0505 14:27:33.742127   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:33.742222   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:33.743021   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Searching for f6:52:28:c9:d6:ef in /var/db/dhcpd_leases ...
	I0505 14:27:33.743126   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found 54 entries in /var/db/dhcpd_leases!
	I0505 14:27:33.743151   56490 main.go:141] libmachine: (ha-671000-m05) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.55 HWAddress:f6:52:28:c9:d6:ef ID:1,f6:52:28:c9:d6:ef Lease:0x66394b44}
	I0505 14:27:33.743163   56490 main.go:141] libmachine: (ha-671000-m05) DBG | Found match: f6:52:28:c9:d6:ef
	I0505 14:27:33.743176   56490 main.go:141] libmachine: (ha-671000-m05) DBG | IP: 192.169.0.55
	I0505 14:27:33.743215   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetConfigRaw
	I0505 14:27:33.743830   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:33.743931   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:33.744023   56490 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 14:27:33.744031   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetState
	I0505 14:27:33.744105   56490 main.go:141] libmachine: (ha-671000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:27:33.744173   56490 main.go:141] libmachine: (ha-671000-m05) DBG | hyperkit pid from json: 56500
	I0505 14:27:33.744992   56490 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 14:27:33.745003   56490 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 14:27:33.745010   56490 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 14:27:33.745015   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:33.745100   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:33.745182   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:33.745261   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:33.745335   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:33.745440   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:33.745670   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:33.745678   56490 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 14:27:33.764067   56490 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0505 14:27:36.819535   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:27:36.819548   56490 main.go:141] libmachine: Detecting the provisioner...
	I0505 14:27:36.819554   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:36.819682   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:36.819783   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:36.819879   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:36.819968   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:36.820099   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:36.820254   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:36.820266   56490 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 14:27:36.874376   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 14:27:36.874438   56490 main.go:141] libmachine: found compatible host: buildroot
	I0505 14:27:36.874445   56490 main.go:141] libmachine: Provisioning with buildroot...
	I0505 14:27:36.874450   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetMachineName
	I0505 14:27:36.874599   56490 buildroot.go:166] provisioning hostname "ha-671000-m05"
	I0505 14:27:36.874608   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetMachineName
	I0505 14:27:36.874693   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:36.874779   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:36.874872   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:36.874957   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:36.875052   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:36.875178   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:36.875311   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:36.875319   56490 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m05 && echo "ha-671000-m05" | sudo tee /etc/hostname
	I0505 14:27:36.940451   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m05
	
	I0505 14:27:36.940483   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:36.940616   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:36.940707   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:36.940793   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:36.940877   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:36.941015   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:36.941160   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:36.941171   56490 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m05' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m05/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m05' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:27:37.001624   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:27:37.001647   56490 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:27:37.001667   56490 buildroot.go:174] setting up certificates
	I0505 14:27:37.001673   56490 provision.go:84] configureAuth start
	I0505 14:27:37.001680   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetMachineName
	I0505 14:27:37.001820   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetIP
	I0505 14:27:37.001930   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:37.002018   56490 provision.go:143] copyHostCerts
	I0505 14:27:37.002051   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:27:37.002121   56490 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:27:37.002128   56490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:27:37.002275   56490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:27:37.002494   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:27:37.002534   56490 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:27:37.002544   56490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:27:37.002625   56490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:27:37.002783   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:27:37.002822   56490 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:27:37.002827   56490 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:27:37.002901   56490 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:27:37.003064   56490 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m05 san=[127.0.0.1 192.169.0.55 ha-671000-m05 localhost minikube]
	I0505 14:27:37.153347   56490 provision.go:177] copyRemoteCerts
	I0505 14:27:37.153397   56490 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:27:37.153412   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:37.153564   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:37.153665   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.153751   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:37.153889   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/id_rsa Username:docker}
	I0505 14:27:37.187892   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:27:37.187974   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:27:37.207937   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:27:37.208018   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:27:37.227985   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:27:37.228059   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:27:37.248331   56490 provision.go:87] duration metric: took 246.651039ms to configureAuth
	I0505 14:27:37.248345   56490 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:27:37.248548   56490 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:27:37.248561   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:37.248699   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:37.248775   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:37.248867   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.248953   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.249030   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:37.249139   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:37.249257   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:37.249264   56490 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:27:37.305081   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:27:37.305097   56490 buildroot.go:70] root file system type: tmpfs
	I0505 14:27:37.305185   56490 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:27:37.305199   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:37.305325   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:37.305423   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.305516   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.305605   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:37.305726   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:37.305872   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:37.305917   56490 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:27:37.373131   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:27:37.373153   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:37.373291   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:37.373384   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.373483   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:37.373563   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:37.373699   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:37.373844   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:37.373859   56490 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:27:38.897887   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:27:38.897904   56490 main.go:141] libmachine: Checking connection to Docker...
	I0505 14:27:38.897910   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetURL
	I0505 14:27:38.898049   56490 main.go:141] libmachine: Docker is up and running!
	I0505 14:27:38.898055   56490 main.go:141] libmachine: Reticulating splines...
	I0505 14:27:38.898060   56490 client.go:171] duration metric: took 16.157290701s to LocalClient.Create
	I0505 14:27:38.898073   56490 start.go:167] duration metric: took 16.157333267s to libmachine.API.Create "ha-671000"
	I0505 14:27:38.898078   56490 start.go:293] postStartSetup for "ha-671000-m05" (driver="hyperkit")
	I0505 14:27:38.898092   56490 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:27:38.898101   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:38.898243   56490 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:27:38.898259   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:38.898352   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:38.898431   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:38.898510   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:38.898594   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/id_rsa Username:docker}
	I0505 14:27:38.932216   56490 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:27:38.935354   56490 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:27:38.935367   56490 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:27:38.935481   56490 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:27:38.935685   56490 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:27:38.935692   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:27:38.935895   56490 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:27:38.942955   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:27:38.963822   56490 start.go:296] duration metric: took 65.736919ms for postStartSetup
	I0505 14:27:38.963855   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetConfigRaw
	I0505 14:27:38.964503   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetIP
	I0505 14:27:38.964661   56490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:27:38.964994   56490 start.go:128] duration metric: took 16.255235255s to createHost
	I0505 14:27:38.965007   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:38.965109   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:38.965208   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:38.965290   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:38.965374   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:38.965483   56490 main.go:141] libmachine: Using SSH client type: native
	I0505 14:27:38.965608   56490 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x48ffb80] 0x49028e0 <nil>  [] 0s} 192.169.0.55 22 <nil> <nil>}
	I0505 14:27:38.965616   56490 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:27:39.020398   56490 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944459.017827082
	
	I0505 14:27:39.020411   56490 fix.go:216] guest clock: 1714944459.017827082
	I0505 14:27:39.020424   56490 fix.go:229] Guest: 2024-05-05 14:27:39.017827082 -0700 PDT Remote: 2024-05-05 14:27:38.965002 -0700 PDT m=+16.482863892 (delta=52.825082ms)
	I0505 14:27:39.020444   56490 fix.go:200] guest clock delta is within tolerance: 52.825082ms
	I0505 14:27:39.020449   56490 start.go:83] releasing machines lock for "ha-671000-m05", held for 16.310825897s
	I0505 14:27:39.020470   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:39.020606   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetIP
	I0505 14:27:39.020711   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:39.021034   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:39.021147   56490 main.go:141] libmachine: (ha-671000-m05) Calling .DriverName
	I0505 14:27:39.021252   56490 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:27:39.021284   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:39.021378   56490 ssh_runner.go:195] Run: systemctl --version
	I0505 14:27:39.021391   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHHostname
	I0505 14:27:39.021407   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:39.021521   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:39.021565   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHPort
	I0505 14:27:39.021644   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:39.021707   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHKeyPath
	I0505 14:27:39.021779   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/id_rsa Username:docker}
	I0505 14:27:39.021832   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetSSHUsername
	I0505 14:27:39.021924   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m05/id_rsa Username:docker}
	I0505 14:27:39.052588   56490 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:27:39.102412   56490 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:27:39.102472   56490 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:27:39.114793   56490 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:27:39.114811   56490 start.go:494] detecting cgroup driver to use...
	I0505 14:27:39.114908   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:27:39.129853   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:27:39.138129   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:27:39.146274   56490 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:27:39.146319   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:27:39.154606   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:27:39.163016   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:27:39.171362   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:27:39.179868   56490 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:27:39.188618   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:27:39.197002   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:27:39.205231   56490 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:27:39.213624   56490 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:27:39.221069   56490 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:27:39.228782   56490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:27:39.327389   56490 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:27:39.347023   56490 start.go:494] detecting cgroup driver to use...
	I0505 14:27:39.347107   56490 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:27:39.366011   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:27:39.377978   56490 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:27:39.403181   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:27:39.413594   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:27:39.423642   56490 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:27:39.445415   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:27:39.455788   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:27:39.470790   56490 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:27:39.474404   56490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:27:39.481818   56490 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:27:39.495470   56490 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:27:39.589385   56490 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:27:39.691122   56490 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:27:39.691214   56490 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:27:39.705811   56490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:27:39.809856   56490 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:27:42.034711   56490 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.224856047s)
	I0505 14:27:42.034779   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:27:42.045645   56490 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:29:39.664422   56490 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m57.619870953s)
	I0505 14:29:39.664488   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:29:39.674976   56490 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:29:39.770850   56490 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:29:39.880705   56490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:29:39.973147   56490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:29:39.986989   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:29:39.998280   56490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:29:40.093766   56490 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:29:40.150462   56490 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:29:40.150542   56490 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:29:40.154899   56490 start.go:562] Will wait 60s for crictl version
	I0505 14:29:40.154953   56490 ssh_runner.go:195] Run: which crictl
	I0505 14:29:40.158888   56490 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:29:40.187178   56490 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:29:40.187248   56490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:29:40.204842   56490 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:29:40.243947   56490 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:29:40.244010   56490 main.go:141] libmachine: (ha-671000-m05) Calling .GetIP
	I0505 14:29:40.244291   56490 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:29:40.247683   56490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:29:40.257337   56490 mustload.go:65] Loading cluster: ha-671000
	I0505 14:29:40.257514   56490 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:29:40.257749   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:29:40.257772   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:29:40.266462   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58235
	I0505 14:29:40.266795   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:29:40.267121   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:29:40.267133   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:29:40.267334   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:29:40.267430   56490 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:29:40.267508   56490 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:29:40.267599   56490 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56435
	I0505 14:29:40.268540   56490 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:29:40.268781   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:29:40.268804   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:29:40.277581   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58237
	I0505 14:29:40.277914   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:29:40.278301   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:29:40.278320   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:29:40.278540   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:29:40.278648   56490 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:29:40.278747   56490 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.55
	I0505 14:29:40.278754   56490 certs.go:194] generating shared ca certs ...
	I0505 14:29:40.278768   56490 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:29:40.278962   56490 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:29:40.279035   56490 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:29:40.279045   56490 certs.go:256] generating profile certs ...
	I0505 14:29:40.279171   56490 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:29:40.279194   56490 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.16af8e87
	I0505 14:29:40.279207   56490 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.16af8e87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.55 192.169.0.254]
	I0505 14:29:40.404755   56490 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.16af8e87 ...
	I0505 14:29:40.404775   56490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.16af8e87: {Name:mke0fe49c5b94418c2ed262a8ee978a3770a3ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:29:40.405104   56490 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.16af8e87 ...
	I0505 14:29:40.405117   56490 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.16af8e87: {Name:mkcae2899a9f99c173335b870cedef6c894c4ead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:29:40.405358   56490 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.16af8e87 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
	I0505 14:29:40.406358   56490 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.16af8e87 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
	I0505 14:29:40.406673   56490 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:29:40.406683   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:29:40.406709   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:29:40.406729   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:29:40.406747   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:29:40.406770   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:29:40.406790   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:29:40.406809   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:29:40.406827   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:29:40.406921   56490 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:29:40.406972   56490 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:29:40.406980   56490 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:29:40.407023   56490 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:29:40.407062   56490 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:29:40.407103   56490 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:29:40.407199   56490 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:29:40.407251   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:29:40.407277   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:29:40.407295   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:29:40.407324   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:29:40.407471   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:29:40.407565   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:29:40.407661   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:29:40.407739   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:29:40.435420   56490 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0505 14:29:40.439022   56490 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 14:29:40.447760   56490 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0505 14:29:40.451367   56490 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0505 14:29:40.459151   56490 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 14:29:40.462292   56490 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 14:29:40.470848   56490 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0505 14:29:40.474087   56490 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 14:29:40.481858   56490 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0505 14:29:40.486017   56490 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 14:29:40.494253   56490 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0505 14:29:40.497330   56490 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 14:29:40.505360   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:29:40.525597   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:29:40.545433   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:29:40.564831   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:29:40.584993   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0505 14:29:40.605422   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:29:40.625011   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:29:40.644765   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:29:40.664716   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:29:40.684687   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:29:40.704934   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:29:40.725065   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 14:29:40.738684   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0505 14:29:40.752173   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 14:29:40.765641   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 14:29:40.779041   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 14:29:40.792416   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 14:29:40.808037   56490 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 14:29:40.821477   56490 ssh_runner.go:195] Run: openssl version
	I0505 14:29:40.825759   56490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:29:40.834093   56490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:29:40.837456   56490 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:29:40.837501   56490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:29:40.841713   56490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:29:40.850066   56490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:29:40.858413   56490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:29:40.861861   56490 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:29:40.861899   56490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:29:40.866129   56490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:29:40.874554   56490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:29:40.883042   56490 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:29:40.886416   56490 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:29:40.886457   56490 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:29:40.890673   56490 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:29:40.898988   56490 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:29:40.902212   56490 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 14:29:40.902261   56490 kubeadm.go:928] updating node {m05 192.169.0.55 8443 v1.30.0  true true} ...
	I0505 14:29:40.902346   56490 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m05 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:29:40.902370   56490 kube-vip.go:111] generating kube-vip config ...
	I0505 14:29:40.902409   56490 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:29:40.914621   56490 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:29:40.914700   56490 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:29:40.914757   56490 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:29:40.928057   56490 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0505 14:29:40.928132   56490 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0505 14:29:40.936909   56490 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0505 14:29:40.936934   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 14:29:40.936909   56490 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0505 14:29:40.936955   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 14:29:40.936909   56490 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0505 14:29:40.937013   56490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:29:40.937030   56490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0505 14:29:40.937051   56490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0505 14:29:40.951505   56490 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 14:29:40.951558   56490 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0505 14:29:40.951566   56490 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0505 14:29:40.951587   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0505 14:29:40.951589   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0505 14:29:40.951628   56490 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0505 14:29:40.970039   56490 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0505 14:29:40.970077   56490 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0505 14:29:41.514379   56490 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 14:29:41.522520   56490 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 14:29:41.536286   56490 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:29:41.549828   56490 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:29:41.563600   56490 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:29:41.566733   56490 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:29:41.577083   56490 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:29:41.683417   56490 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:29:41.702074   56490 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:29:41.702363   56490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:29:41.702386   56490 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:29:41.711171   56490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58240
	I0505 14:29:41.711617   56490 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:29:41.712033   56490 main.go:141] libmachine: Using API Version  1
	I0505 14:29:41.712054   56490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:29:41.712355   56490 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:29:41.712479   56490 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:29:41.712595   56490 start.go:316] joinCluster: &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m05 IP:192.169.0.55 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:29:41.712696   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0505 14:29:41.712711   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:29:41.712805   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:29:41.712928   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:29:41.713033   56490 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:29:41.713145   56490 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:29:41.868613   56490 start.go:342] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.169.0.55 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}
	I0505 14:29:41.868668   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443"
	I0505 14:32:07.891892   56490 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": (2m26.024585s)
	E0505 14:32:07.891941   56490 start.go:344] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-671000-m05 localhost] and IPs [192.169.0.55 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-671000-m05 localhost] and IPs [192.169.0.55 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.53:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0505 14:32:07.891961   56490 start.go:347] resetting control-plane node "m05" before attempting to rejoin cluster...
	I0505 14:32:07.891974   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force"
	I0505 14:32:07.980001   56490 start.go:351] successfully reset control-plane node "m05"
	I0505 14:32:07.980040   56490 retry.go:31] will retry after 14.761993608s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-671000-m05 localhost] and IPs [192.169.0.55 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-671000-m05 localhost] and IPs [192.169.0.55 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.53:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0505 14:32:22.742551   56490 start.go:342] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.169.0.55 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:true Worker:true}
	I0505 14:32:22.742624   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443"
	I0505 14:34:25.591190   56490 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": (2m2.849704374s)
	E0505 14:34:25.591237   56490 start.go:344] control-plane node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.53:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	I0505 14:34:25.591251   56490 start.go:347] resetting control-plane node "m05" before attempting to rejoin cluster...
	I0505 14:34:25.591262   56490 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force"
	I0505 14:34:25.673956   56490 start.go:351] successfully reset control-plane node "m05"
	I0505 14:34:25.673986   56490 start.go:318] duration metric: took 4m43.964097221s to joinCluster
	I0505 14:34:25.696000   56490 out.go:177] 
	W0505 14:34:25.716767   56490 out.go:239] X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error joining control-plane node "m05" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.53:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_NODE_ADD: failed to add node: join node to cluster: error joining control-plane node "m05" to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qcw5v4.o3850dx1y730r9g4 --discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-671000-m05 --control-plane --apiserver-advertise-address=192.169.0.55 --apiserver-bind-port=8443": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks before initializing the new control plane instance
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using the existing "apiserver-kubelet-client" certificate and key
	[certs] Using the existing "apiserver" certificate and key
	[certs] Using the existing "front-proxy-client" certificate and key
	[certs] Using the existing "etcd/server" certificate and key
	[certs] Using the existing "etcd/peer" certificate and key
	[certs] Using the existing "etcd/healthcheck-client" certificate and key
	[certs] Using the existing "apiserver-etcd-client" certificate and key
	[certs] Valid certificates and keys now exist in "/var/lib/minikube/certs"
	[certs] Using the existing "sa" key
	[kubeconfig] Generating kubeconfig files
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[check-etcd] Checking that the etcd cluster is healthy
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.169.0.53:2379 with maintenance client: context deadline exceeded
	To see the stack trace of this error execute with --v=5 or higher
	
	W0505 14:34:25.716793   56490 out.go:239] * 
	* 
	W0505 14:34:25.729800   56490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 14:34:25.750807   56490 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-671000 --control-plane -v=7 --alsologtostderr" : exit status 80
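Note on the failure above: every join attempt dies in the check-etcd preflight phase trying to dial https://192.169.0.53:2379, which is the address of the m03 control-plane node that was deleted earlier in this run (see "node delete m03" in the Audit table below), so a stale etcd member entry for that node is the likely reason the health check keeps timing out. The sketch below is a hypothetical diagnostic, not something the test executed: the kubectl context and the etcd static-pod name (both derived from the profile name per the usual minikube/kubeadm conventions) and the etcd/ subdirectory under the "/var/lib/minikube/certs" folder reported by kubeadm are assumptions, and <MEMBER_ID> is a placeholder.

	# Hypothetical cleanup sketch (not part of the test run): list etcd members
	# from the surviving primary control plane and look for the deleted m03 node.
	kubectl --context ha-671000 -n kube-system exec etcd-ha-671000 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  member list -w table
	# If a member still advertises a 192.169.0.53 peer URL, remove it by its ID
	# (placeholder <MEMBER_ID>) with the same flags:
	#   ... member remove <MEMBER_ID>
	# then retry the failing step from the args above:
	#   out/minikube-darwin-amd64 node add -p ha-671000 --control-plane -v=7 --alsologtostderr

With the stale member gone, the check-etcd phase would be expected to reach every listed peer, which would let the kubeadm join (and therefore the node add that exited with status 80 / GUEST_NODE_ADD) proceed.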
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-671000 -n ha-671000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 logs -n 25: (3.18195452s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m04 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp testdata/cp-test.txt                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000:/home/docker/cp-test_ha-671000-m04_ha-671000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000 sudo cat                                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m02:/home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m02 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m03:/home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | ha-671000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-671000 ssh -n ha-671000-m03 sudo cat                                                                                      | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | /home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-671000 node stop m02 -v=7                                                                                                 | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:19 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-671000 node start m02 -v=7                                                                                                | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:19 PDT | 05 May 24 14:20 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-671000 -v=7                                                                                                       | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-671000 -v=7                                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT | 05 May 24 14:20 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-671000 --wait=true -v=7                                                                                                | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:20 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-671000                                                                                                            | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:23 PDT |                     |
	| node    | ha-671000 node delete m03 -v=7                                                                                               | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:23 PDT | 05 May 24 14:23 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-671000 stop -v=7                                                                                                          | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:24 PDT | 05 May 24 14:25 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-671000 --wait=true                                                                                                     | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:25 PDT | 05 May 24 14:27 PDT |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-671000                                                                                                             | ha-671000 | jenkins | v1.33.0 | 05 May 24 14:27 PDT |                     |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 14:25:35
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 14:25:35.164334   56422 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:25:35.164619   56422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:25:35.164624   56422 out.go:304] Setting ErrFile to fd 2...
	I0505 14:25:35.164628   56422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:25:35.164813   56422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:25:35.166280   56422 out.go:298] Setting JSON to false
	I0505 14:25:35.188180   56422 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":19506,"bootTime":1714924829,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 14:25:35.188264   56422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:25:35.209600   56422 out.go:177] * [ha-671000] minikube v1.33.0 on Darwin 14.4.1
	I0505 14:25:35.251421   56422 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:25:35.251479   56422 notify.go:220] Checking for updates...
	I0505 14:25:35.294257   56422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:25:35.315283   56422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 14:25:35.335996   56422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:25:35.357428   56422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:25:35.378364   56422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:25:35.401598   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:25:35.402296   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.402373   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:35.411939   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58081
	I0505 14:25:35.412257   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:35.412680   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:25:35.412689   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:35.412913   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:35.413100   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:35.413332   56422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:25:35.413571   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.413594   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:35.422034   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58083
	I0505 14:25:35.422362   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:35.422727   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:25:35.422740   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:35.422962   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:35.423067   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:35.450398   56422 out.go:177] * Using the hyperkit driver based on existing profile
	I0505 14:25:35.492368   56422 start.go:297] selected driver: hyperkit
	I0505 14:25:35.492398   56422 start.go:901] validating driver "hyperkit" against &{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingre
ss:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:25:35.492671   56422 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:25:35.492852   56422 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:25:35.493049   56422 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 14:25:35.502785   56422 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 14:25:35.506689   56422 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.506711   56422 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 14:25:35.509367   56422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:25:35.509439   56422 cni.go:84] Creating CNI manager for ""
	I0505 14:25:35.509447   56422 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 14:25:35.509529   56422 start.go:340] cluster config:
	{Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:fa
lse kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:25:35.509635   56422 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 14:25:35.551145   56422 out.go:177] * Starting "ha-671000" primary control-plane node in "ha-671000" cluster
	I0505 14:25:35.572347   56422 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:25:35.572424   56422 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 14:25:35.572448   56422 cache.go:56] Caching tarball of preloaded images
	I0505 14:25:35.572639   56422 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:25:35.572657   56422 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:25:35.572855   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:25:35.573845   56422 start.go:360] acquireMachinesLock for ha-671000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:25:35.573966   56422 start.go:364] duration metric: took 95.515µs to acquireMachinesLock for "ha-671000"
	I0505 14:25:35.574000   56422 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:25:35.574035   56422 fix.go:54] fixHost starting: 
	I0505 14:25:35.574437   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.574467   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:35.583595   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58085
	I0505 14:25:35.583939   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:35.584323   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:25:35.584341   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:35.584578   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:35.584710   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:35.584820   56422 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:25:35.584919   56422 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:35.584996   56422 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:25:35.585928   56422 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 56275 missing from process table
	I0505 14:25:35.585968   56422 fix.go:112] recreateIfNeeded on ha-671000: state=Stopped err=<nil>
	I0505 14:25:35.585986   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	W0505 14:25:35.586087   56422 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:25:35.628147   56422 out.go:177] * Restarting existing hyperkit VM for "ha-671000" ...
	I0505 14:25:35.651402   56422 main.go:141] libmachine: (ha-671000) Calling .Start
	I0505 14:25:35.651711   56422 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:35.651761   56422 main.go:141] libmachine: (ha-671000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid
	I0505 14:25:35.653533   56422 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 56275 missing from process table
	I0505 14:25:35.653551   56422 main.go:141] libmachine: (ha-671000) DBG | pid 56275 is in state "Stopped"
	I0505 14:25:35.653568   56422 main.go:141] libmachine: (ha-671000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid...
	I0505 14:25:35.653751   56422 main.go:141] libmachine: (ha-671000) DBG | Using UUID 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96
	I0505 14:25:35.769506   56422 main.go:141] libmachine: (ha-671000) DBG | Generated MAC 72:52:a3:7d:5c:d1
	I0505 14:25:35.769531   56422 main.go:141] libmachine: (ha-671000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:25:35.769664   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a4840)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:25:35.769690   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a4840)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:25:35.769731   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9389e317-b0a3-4e2d-8cc9-aa1a138ddf96", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyp
rintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:25:35.769765   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9389e317-b0a3-4e2d-8cc9-aa1a138ddf96 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/ha-671000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nom
odeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:25:35.769785   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:25:35.771199   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 DEBUG: hyperkit: Pid is 56435
	I0505 14:25:35.771660   56422 main.go:141] libmachine: (ha-671000) DBG | Attempt 0
	I0505 14:25:35.771685   56422 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:35.771797   56422 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56435
	I0505 14:25:35.773677   56422 main.go:141] libmachine: (ha-671000) DBG | Searching for 72:52:a3:7d:5c:d1 in /var/db/dhcpd_leases ...
	I0505 14:25:35.773822   56422 main.go:141] libmachine: (ha-671000) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:25:35.773848   56422 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:25:35.773880   56422 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
	I0505 14:25:35.773896   56422 main.go:141] libmachine: (ha-671000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x663949ba}
	I0505 14:25:35.773910   56422 main.go:141] libmachine: (ha-671000) DBG | Found match: 72:52:a3:7d:5c:d1
	I0505 14:25:35.773922   56422 main.go:141] libmachine: (ha-671000) DBG | IP: 192.169.0.51
	I0505 14:25:35.773960   56422 main.go:141] libmachine: (ha-671000) Calling .GetConfigRaw
	I0505 14:25:35.774679   56422 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:25:35.774890   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:25:35.775302   56422 machine.go:94] provisionDockerMachine start ...
	I0505 14:25:35.775312   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:35.775447   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:35.775550   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:35.775644   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:35.775738   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:35.775825   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:35.775936   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:35.776154   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:35.776164   56422 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:25:35.779199   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:25:35.831096   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:25:35.831811   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:25:35.831828   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:25:35.831842   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:25:35.831850   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:35 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:25:36.216112   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:25:36.216136   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:25:36.330589   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:25:36.330614   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:25:36.330647   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:25:36.330665   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:25:36.331478   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:25:36.331491   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:25:41.613048   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:41 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:25:41.613122   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:41 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:25:41.613133   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:41 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:25:41.637769   56422 main.go:141] libmachine: (ha-671000) DBG | 2024/05/05 14:25:41 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:25:46.842239   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:25:46.842253   56422 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:25:46.842429   56422 buildroot.go:166] provisioning hostname "ha-671000"
	I0505 14:25:46.842438   56422 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:25:46.842533   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:46.842631   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:46.842738   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:46.842841   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:46.842928   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:46.843067   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:46.843200   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:46.843213   56422 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000 && echo "ha-671000" | sudo tee /etc/hostname
	I0505 14:25:46.908773   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000
	
	I0505 14:25:46.908791   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:46.908938   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:46.909033   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:46.909121   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:46.909204   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:46.909326   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:46.909482   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:46.909493   56422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:25:46.971785   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:25:46.971804   56422 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:25:46.971819   56422 buildroot.go:174] setting up certificates
	I0505 14:25:46.971825   56422 provision.go:84] configureAuth start
	I0505 14:25:46.971831   56422 main.go:141] libmachine: (ha-671000) Calling .GetMachineName
	I0505 14:25:46.971961   56422 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:25:46.972061   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:46.972159   56422 provision.go:143] copyHostCerts
	I0505 14:25:46.972196   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:25:46.972276   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:25:46.972285   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:25:46.972442   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:25:46.972669   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:25:46.972709   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:25:46.972714   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:25:46.972802   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:25:46.972952   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:25:46.972989   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:25:46.972994   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:25:46.973068   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:25:46.973217   56422 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000 san=[127.0.0.1 192.169.0.51 ha-671000 localhost minikube]
	I0505 14:25:47.182478   56422 provision.go:177] copyRemoteCerts
	I0505 14:25:47.182541   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:25:47.182556   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:47.182791   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:47.182876   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.183002   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:47.183146   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:25:47.219014   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:25:47.219117   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:25:47.238378   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:25:47.238444   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0505 14:25:47.257608   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:25:47.257670   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0505 14:25:47.276487   56422 provision.go:87] duration metric: took 304.654266ms to configureAuth
	I0505 14:25:47.276499   56422 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:25:47.276667   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:25:47.276680   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:47.276806   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:47.276901   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:47.276988   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.277070   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.277137   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:47.277250   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:47.277370   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:47.277378   56422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:25:47.333292   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:25:47.333309   56422 buildroot.go:70] root file system type: tmpfs
	I0505 14:25:47.333385   56422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:25:47.333398   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:47.333546   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:47.333658   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.333767   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.333869   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:47.334004   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:47.334141   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:47.334189   56422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:25:47.400793   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:25:47.400817   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:47.400948   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:47.401045   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.401141   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:47.401221   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:47.401345   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:47.401482   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:47.401494   56422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:25:49.054452   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:25:49.054468   56422 machine.go:97] duration metric: took 13.279285403s to provisionDockerMachine
	I0505 14:25:49.054475   56422 start.go:293] postStartSetup for "ha-671000" (driver="hyperkit")
	I0505 14:25:49.054482   56422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:25:49.054491   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:49.054675   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:25:49.054690   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:49.054797   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:49.054895   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:49.054986   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:49.055074   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:25:49.090601   56422 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:25:49.093621   56422 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:25:49.093634   56422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:25:49.093729   56422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:25:49.093917   56422 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:25:49.093924   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:25:49.094127   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:25:49.101802   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:25:49.120552   56422 start.go:296] duration metric: took 66.070779ms for postStartSetup
	I0505 14:25:49.120571   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:49.120738   56422 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:25:49.120750   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:49.120841   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:49.120932   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:49.121015   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:49.121103   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:25:49.156586   56422 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:25:49.156638   56422 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:25:49.188562   56422 fix.go:56] duration metric: took 13.614678118s for fixHost
	I0505 14:25:49.188584   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:49.188724   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:49.188851   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:49.188953   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:49.189044   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:49.189184   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:49.189326   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.51 22 <nil> <nil>}
	I0505 14:25:49.189334   56422 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:25:49.244738   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944349.131326569
	
	I0505 14:25:49.244750   56422 fix.go:216] guest clock: 1714944349.131326569
	I0505 14:25:49.244755   56422 fix.go:229] Guest: 2024-05-05 14:25:49.131326569 -0700 PDT Remote: 2024-05-05 14:25:49.188574 -0700 PDT m=+14.068007527 (delta=-57.247431ms)
	I0505 14:25:49.244777   56422 fix.go:200] guest clock delta is within tolerance: -57.247431ms
	I0505 14:25:49.244784   56422 start.go:83] releasing machines lock for "ha-671000", held for 13.67093787s
	I0505 14:25:49.244802   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:49.244933   56422 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:25:49.245026   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:49.245320   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:49.245412   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:25:49.245536   56422 ssh_runner.go:195] Run: cat /version.json
	I0505 14:25:49.245547   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:49.245643   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:49.245712   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:49.245788   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:49.245876   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:25:49.246140   56422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:25:49.246167   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:25:49.246256   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:25:49.246336   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:25:49.246414   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:25:49.246484   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:25:49.280946   56422 ssh_runner.go:195] Run: systemctl --version
	I0505 14:25:49.285321   56422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 14:25:49.344825   56422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:25:49.344940   56422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:25:49.360506   56422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:25:49.360519   56422 start.go:494] detecting cgroup driver to use...
	I0505 14:25:49.360633   56422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:25:49.378190   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:25:49.386904   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:25:49.395742   56422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:25:49.395782   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:25:49.404515   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:25:49.413467   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:25:49.422087   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:25:49.430782   56422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:25:49.439666   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:25:49.448369   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:25:49.457064   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:25:49.465886   56422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:25:49.473952   56422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:25:49.482198   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:25:49.582894   56422 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:25:49.601041   56422 start.go:494] detecting cgroup driver to use...
	I0505 14:25:49.601118   56422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:25:49.621029   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:25:49.635361   56422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:25:49.657744   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:25:49.670370   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:25:49.686742   56422 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:25:49.707246   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:25:49.717891   56422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:25:49.733212   56422 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:25:49.736243   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:25:49.743428   56422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:25:49.757383   56422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:25:49.854301   56422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:25:49.966467   56422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:25:49.966554   56422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:25:49.980618   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:25:50.085426   56422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:25:52.366230   56422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.280806594s)
	I0505 14:25:52.366290   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:25:52.377183   56422 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:25:52.389676   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:25:52.400051   56422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:25:52.494255   56422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:25:52.601753   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:25:52.708962   56422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:25:52.722436   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:25:52.732676   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:25:52.832260   56422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:25:52.896555   56422 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:25:52.896635   56422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:25:52.901002   56422 start.go:562] Will wait 60s for crictl version
	I0505 14:25:52.901053   56422 ssh_runner.go:195] Run: which crictl
	I0505 14:25:52.904944   56422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:25:52.930224   56422 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:25:52.930307   56422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:25:52.946395   56422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:25:52.985876   56422 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:25:52.985925   56422 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:25:52.986347   56422 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:25:52.990955   56422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:25:53.000839   56422 kubeadm.go:877] updating cluster {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 14:25:53.000921   56422 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:25:53.000980   56422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:25:53.012664   56422 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:25:53.012675   56422 docker.go:615] Images already preloaded, skipping extraction
	I0505 14:25:53.012746   56422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 14:25:53.034471   56422 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	ghcr.io/kube-vip/kube-vip:v0.7.1
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0505 14:25:53.034491   56422 cache_images.go:84] Images are preloaded, skipping loading
	I0505 14:25:53.034502   56422 kubeadm.go:928] updating node { 192.169.0.51 8443 v1.30.0 docker true true} ...
	I0505 14:25:53.034575   56422 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:25:53.034645   56422 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 14:25:53.052266   56422 cni.go:84] Creating CNI manager for ""
	I0505 14:25:53.052280   56422 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0505 14:25:53.052296   56422 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 14:25:53.052316   56422 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.51 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-671000 NodeName:ha-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.51"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.51 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 14:25:53.052423   56422 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.51
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-671000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.51
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.51"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 14:25:53.052441   56422 kube-vip.go:111] generating kube-vip config ...
	I0505 14:25:53.052489   56422 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:25:53.065198   56422 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:25:53.065277   56422 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:25:53.065327   56422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:25:53.073362   56422 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:25:53.073406   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0505 14:25:53.080835   56422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0505 14:25:53.094484   56422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:25:53.107750   56422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0505 14:25:53.121584   56422 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:25:53.135288   56422 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:25:53.138297   56422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:25:53.147793   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:25:53.241037   56422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:25:53.255942   56422 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.51
	I0505 14:25:53.255956   56422 certs.go:194] generating shared ca certs ...
	I0505 14:25:53.255969   56422 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:25:53.256159   56422 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:25:53.256229   56422 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:25:53.256240   56422 certs.go:256] generating profile certs ...
	I0505 14:25:53.256352   56422 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:25:53.256375   56422 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.93e67ec5
	I0505 14:25:53.256390   56422 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.93e67ec5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.51 192.169.0.52 192.169.0.254]
	I0505 14:25:53.381899   56422 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.93e67ec5 ...
	I0505 14:25:53.381922   56422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.93e67ec5: {Name:mk2b7a5d7a2844a9b91834a6fcb1c8c36127fba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:25:53.382278   56422 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.93e67ec5 ...
	I0505 14:25:53.382292   56422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.93e67ec5: {Name:mk797a41105a0e48429afe55cbac1cf6186c58da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:25:53.382515   56422 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt.93e67ec5 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt
	I0505 14:25:53.382743   56422 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.93e67ec5 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key
	I0505 14:25:53.383006   56422 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:25:53.383016   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:25:53.383039   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:25:53.383058   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:25:53.383078   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:25:53.383096   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:25:53.383114   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:25:53.383132   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:25:53.383148   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:25:53.383238   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:25:53.383284   56422 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:25:53.383292   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:25:53.383322   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:25:53.383351   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:25:53.383379   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:25:53.383441   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:25:53.383473   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:25:53.383494   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:25:53.383512   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:25:53.383916   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:25:53.419300   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:25:53.452078   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:25:53.496582   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:25:53.544622   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 14:25:53.595857   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:25:53.632874   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:25:53.668549   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:25:53.706369   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:25:53.744828   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:25:53.766995   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:25:53.787130   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 14:25:53.801000   56422 ssh_runner.go:195] Run: openssl version
	I0505 14:25:53.805268   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:25:53.813473   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:25:53.816883   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:25:53.816916   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:25:53.821494   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:25:53.830231   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:25:53.838495   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:25:53.841989   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:25:53.842024   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:25:53.846271   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:25:53.854884   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:25:53.863444   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:25:53.866862   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:25:53.866897   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:25:53.871073   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:25:53.879908   56422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:25:53.883387   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:25:53.887762   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:25:53.892158   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:25:53.896655   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:25:53.900990   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:25:53.905283   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:25:53.909602   56422 kubeadm.go:391] StartCluster: {Name:ha-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:25:53.909709   56422 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 14:25:53.920561   56422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0505 14:25:53.928368   56422 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0505 14:25:53.928378   56422 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0505 14:25:53.928384   56422 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0505 14:25:53.928422   56422 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0505 14:25:53.936214   56422 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:25:53.936520   56422 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-671000" does not appear in /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:25:53.936596   56422 kubeconfig.go:62] /Users/jenkins/minikube-integration/18602-53665/kubeconfig needs updating (will repair): [kubeconfig missing "ha-671000" cluster setting kubeconfig missing "ha-671000" context setting]
	I0505 14:25:53.936774   56422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:25:53.937391   56422 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:25:53.937587   56422 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.51:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28d3220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0505 14:25:53.937899   56422 cert_rotation.go:137] Starting client certificate rotation controller
	I0505 14:25:53.938057   56422 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0505 14:25:53.945432   56422 kubeadm.go:624] The running cluster does not require reconfiguration: 192.169.0.51
	I0505 14:25:53.945443   56422 kubeadm.go:591] duration metric: took 17.056089ms to restartPrimaryControlPlane
	I0505 14:25:53.945449   56422 kubeadm.go:393] duration metric: took 35.850687ms to StartCluster
	I0505 14:25:53.945457   56422 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:25:53.945530   56422 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:25:53.945858   56422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:25:53.946077   56422 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.51 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:25:53.946090   56422 start.go:240] waiting for startup goroutines ...
	I0505 14:25:53.946104   56422 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 14:25:53.990212   56422 out.go:177] * Enabled addons: 
	I0505 14:25:53.946204   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:25:54.011128   56422 addons.go:510] duration metric: took 65.027726ms for enable addons: enabled=[]
	I0505 14:25:54.011215   56422 start.go:245] waiting for cluster config update ...
	I0505 14:25:54.011233   56422 start.go:254] writing updated cluster config ...
	I0505 14:25:54.033092   56422 out.go:177] 
	I0505 14:25:54.054702   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:25:54.054826   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:25:54.077110   56422 out.go:177] * Starting "ha-671000-m02" control-plane node in "ha-671000" cluster
	I0505 14:25:54.119123   56422 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:25:54.119159   56422 cache.go:56] Caching tarball of preloaded images
	I0505 14:25:54.119327   56422 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:25:54.119345   56422 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:25:54.119478   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:25:54.120384   56422 start.go:360] acquireMachinesLock for ha-671000-m02: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:25:54.120496   56422 start.go:364] duration metric: took 85.202µs to acquireMachinesLock for "ha-671000-m02"
	I0505 14:25:54.120522   56422 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:25:54.120531   56422 fix.go:54] fixHost starting: m02
	I0505 14:25:54.120946   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:54.120964   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:54.130125   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58107
	I0505 14:25:54.130462   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:54.130881   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:25:54.130895   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:54.131145   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:54.131283   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:25:54.131380   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:25:54.131460   56422 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:54.131545   56422 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
	I0505 14:25:54.132478   56422 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56285 missing from process table
	I0505 14:25:54.132504   56422 fix.go:112] recreateIfNeeded on ha-671000-m02: state=Stopped err=<nil>
	I0505 14:25:54.132516   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	W0505 14:25:54.132599   56422 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:25:54.174992   56422 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m02" ...
	I0505 14:25:54.196203   56422 main.go:141] libmachine: (ha-671000-m02) Calling .Start
	I0505 14:25:54.196469   56422 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:54.196517   56422 main.go:141] libmachine: (ha-671000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid
	I0505 14:25:54.198282   56422 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56285 missing from process table
	I0505 14:25:54.198298   56422 main.go:141] libmachine: (ha-671000-m02) DBG | pid 56285 is in state "Stopped"
	I0505 14:25:54.198316   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid...
	I0505 14:25:54.198708   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Using UUID 294bfc97-3e6f-4d68-b3f3-54381951a5e8
	I0505 14:25:54.230694   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Generated MAC 92:83:2c:36:f7:7d
	I0505 14:25:54.230724   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:25:54.230894   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:25:54.230933   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"294bfc97-3e6f-4d68-b3f3-54381951a5e8", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:25:54.231011   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "294bfc97-3e6f-4d68-b3f3-54381951a5e8", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/
machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:25:54.231069   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 294bfc97-3e6f-4d68-b3f3-54381951a5e8 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/ha-671000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/initrd,earlyprintk=serial loglevel=3 co
nsole=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:25:54.231099   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:25:54.232810   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 DEBUG: hyperkit: Pid is 56442
	I0505 14:25:54.233285   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Attempt 0
	I0505 14:25:54.233307   56422 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:54.233340   56422 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56442
	I0505 14:25:54.235409   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Searching for 92:83:2c:36:f7:7d in /var/db/dhcpd_leases ...
	I0505 14:25:54.235523   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:25:54.235544   56422 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:25:54.235559   56422 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:25:54.235583   56422 main.go:141] libmachine: (ha-671000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x663949ce}
	I0505 14:25:54.235618   56422 main.go:141] libmachine: (ha-671000-m02) DBG | Found match: 92:83:2c:36:f7:7d
	I0505 14:25:54.235633   56422 main.go:141] libmachine: (ha-671000-m02) DBG | IP: 192.169.0.52
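	
	The lease search above is how the hyperkit driver recovers the VM's IP after a restart: it keeps the MAC address generated for the VM and scans macOS's /var/db/dhcpd_leases for the matching entry. A rough Go sketch of that lookup, assuming the usual name=/ip_address=/hw_address=1,<mac> layout of that file and that ip_address precedes hw_address within each lease block (details inferred for illustration, not taken from minikube's source):
	
	    package main
	
	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "strings"
	    )
	
	    // ipForMAC scans the dhcpd leases file for the lease whose hw_address ends with mac.
	    func ipForMAC(leasesPath, mac string) (string, error) {
	        f, err := os.Open(leasesPath)
	        if err != nil {
	            return "", err
	        }
	        defer f.Close()
	
	        var ip string
	        scanner := bufio.NewScanner(f)
	        for scanner.Scan() {
	            line := strings.TrimSpace(scanner.Text())
	            switch {
	            case strings.HasPrefix(line, "ip_address="):
	                ip = strings.TrimPrefix(line, "ip_address=")
	            case strings.HasPrefix(line, "hw_address="):
	                // recorded as hw_address=1,<mac>; compare only the MAC part
	                if strings.HasSuffix(line, ","+mac) {
	                    return ip, nil
	                }
	            }
	        }
	        if err := scanner.Err(); err != nil {
	            return "", err
	        }
	        return "", fmt.Errorf("no lease found for %s", mac)
	    }
	
	    func main() {
	        ip, err := ipForMAC("/var/db/dhcpd_leases", "92:83:2c:36:f7:7d") // MAC from the log above
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println(ip) // 192.169.0.52 according to the matching entry above
	    }
	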
	I0505 14:25:54.235637   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetConfigRaw
	I0505 14:25:54.236325   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:25:54.236590   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:25:54.237001   56422 machine.go:94] provisionDockerMachine start ...
	I0505 14:25:54.237011   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:25:54.237123   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:25:54.237210   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:25:54.237349   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:25:54.237447   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:25:54.237552   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:25:54.237696   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:25:54.237854   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:25:54.237867   56422 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:25:54.240483   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:25:54.251294   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:25:54.252390   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:25:54.252413   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:25:54.252422   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:25:54.252429   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:25:54.640528   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:25:54.640544   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:25:54.755475   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:25:54.755499   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:25:54.755507   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:25:54.755515   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:25:54.756292   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:25:54.756302   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:25:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:26:00.041679   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:26:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:26:00.041725   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:26:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:26:00.041761   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:26:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:26:00.065436   56422 main.go:141] libmachine: (ha-671000-m02) DBG | 2024/05/05 14:26:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:26:05.298074   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:26:05.298090   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:26:05.298217   56422 buildroot.go:166] provisioning hostname "ha-671000-m02"
	I0505 14:26:05.298229   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:26:05.298333   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.298432   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:05.298547   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.298648   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.298775   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:05.298893   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:05.299042   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:26:05.299052   56422 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m02 && echo "ha-671000-m02" | sudo tee /etc/hostname
	I0505 14:26:05.363503   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m02
	
	I0505 14:26:05.363517   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.363649   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:05.363731   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.363839   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.363928   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:05.364058   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:05.364197   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:26:05.364208   56422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:26:05.423590   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:26:05.423605   56422 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:26:05.423614   56422 buildroot.go:174] setting up certificates
	I0505 14:26:05.423620   56422 provision.go:84] configureAuth start
	I0505 14:26:05.423627   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetMachineName
	I0505 14:26:05.423740   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:26:05.423838   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.423925   56422 provision.go:143] copyHostCerts
	I0505 14:26:05.423952   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:26:05.423997   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:26:05.424002   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:26:05.424157   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:26:05.424368   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:26:05.424398   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:26:05.424402   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:26:05.424514   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:26:05.424679   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:26:05.424708   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:26:05.424713   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:26:05.424778   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:26:05.424927   56422 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m02 san=[127.0.0.1 192.169.0.52 ha-671000-m02 localhost minikube]
	I0505 14:26:05.641265   56422 provision.go:177] copyRemoteCerts
	I0505 14:26:05.641316   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:26:05.641330   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.641457   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:05.641565   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.641691   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:05.641788   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:26:05.676812   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:26:05.676880   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 14:26:05.696655   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:26:05.696719   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:26:05.715548   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:26:05.715608   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:26:05.734834   56422 provision.go:87] duration metric: took 311.208496ms to configureAuth
	I0505 14:26:05.734849   56422 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:26:05.735017   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:05.735030   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:05.735162   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.735244   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:05.735333   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.735409   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.735486   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:05.735591   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:05.735716   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:26:05.735723   56422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:26:05.790525   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:26:05.790536   56422 buildroot.go:70] root file system type: tmpfs
	I0505 14:26:05.790625   56422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:26:05.790635   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.790769   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:05.790864   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.790963   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.791053   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:05.791176   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:05.791311   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:26:05.791360   56422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:26:05.857249   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:26:05.857268   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:05.857400   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:05.857490   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.857571   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:05.857663   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:05.857786   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:05.857927   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:26:05.857940   56422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:26:07.522854   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:26:07.522868   56422 machine.go:97] duration metric: took 13.285986506s to provisionDockerMachine
	I0505 14:26:07.522876   56422 start.go:293] postStartSetup for "ha-671000-m02" (driver="hyperkit")
	I0505 14:26:07.522883   56422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:26:07.522894   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:07.523109   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:26:07.523121   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:07.523241   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:07.523349   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:07.523449   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:07.523562   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:26:07.557042   56422 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:26:07.560089   56422 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:26:07.560103   56422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:26:07.560190   56422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:26:07.560331   56422 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:26:07.560337   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:26:07.560487   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:26:07.567612   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:26:07.587520   56422 start.go:296] duration metric: took 64.63701ms for postStartSetup
	I0505 14:26:07.587540   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:07.587707   56422 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:26:07.587719   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:07.587815   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:07.587915   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:07.588008   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:07.588092   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:26:07.623015   56422 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:26:07.623072   56422 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:26:07.676017   56422 fix.go:56] duration metric: took 13.555611887s for fixHost
	I0505 14:26:07.676047   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:07.676196   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:07.676288   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:07.676368   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:07.676447   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:07.676570   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:07.676733   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.52 22 <nil> <nil>}
	I0505 14:26:07.676743   56422 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 14:26:07.731361   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944367.884476387
	
	I0505 14:26:07.731379   56422 fix.go:216] guest clock: 1714944367.884476387
	I0505 14:26:07.731385   56422 fix.go:229] Guest: 2024-05-05 14:26:07.884476387 -0700 PDT Remote: 2024-05-05 14:26:07.676033 -0700 PDT m=+32.555642436 (delta=208.443387ms)
	I0505 14:26:07.731396   56422 fix.go:200] guest clock delta is within tolerance: 208.443387ms
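	
	The guest clock check above reads the guest's time over SSH (date +%s.%N), compares it to the host clock at the moment the command returned, and only forces a clock resync when the skew exceeds a tolerance; here the 208ms delta is accepted. A small Go sketch of that comparison, using the two timestamps from this log (the 2s tolerance is an assumed value for illustration, not minikube's):
	
	    package main
	
	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )
	
	    // parseGuestClock converts `date +%s.%N` output (nine fractional digits) into a time.Time.
	    func parseGuestClock(out string) (time.Time, error) {
	        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return time.Time{}, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	                return time.Time{}, err
	            }
	        }
	        return time.Unix(sec, nsec), nil
	    }
	
	    func main() {
	        const tolerance = 2 * time.Second // assumed threshold, for illustration only
	
	        // guest and host timestamps taken from the log above
	        guest, err := parseGuestClock("1714944367.884476387")
	        if err != nil {
	            panic(err)
	        }
	        host := time.Date(2024, time.May, 5, 14, 26, 7, 676033000, time.FixedZone("PDT", -7*60*60))
	
	        delta := guest.Sub(host) // roughly 208ms here
	        if delta < -tolerance || delta > tolerance {
	            fmt.Printf("guest clock delta %v exceeds tolerance; guest clock would be resynced\n", delta)
	        } else {
	            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	        }
	    }
	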
	I0505 14:26:07.731401   56422 start.go:83] releasing machines lock for "ha-671000-m02", held for 13.611022721s
	I0505 14:26:07.731420   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:07.731548   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:26:07.754064   56422 out.go:177] * Found network options:
	I0505 14:26:07.773999   56422 out.go:177]   - NO_PROXY=192.169.0.51
	W0505 14:26:07.794859   56422 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:26:07.794896   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:07.795743   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:07.795989   56422 main.go:141] libmachine: (ha-671000-m02) Calling .DriverName
	I0505 14:26:07.796109   56422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:26:07.796150   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	W0505 14:26:07.796202   56422 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:26:07.796314   56422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:26:07.796336   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHHostname
	I0505 14:26:07.796338   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:07.796567   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHPort
	I0505 14:26:07.796587   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:07.796779   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:07.796836   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHKeyPath
	I0505 14:26:07.797021   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	I0505 14:26:07.797070   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetSSHUsername
	I0505 14:26:07.797256   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m02/id_rsa Username:docker}
	W0505 14:26:07.829679   56422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:26:07.829735   56422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:26:07.890417   56422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:26:07.890445   56422 start.go:494] detecting cgroup driver to use...
	I0505 14:26:07.890598   56422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:26:07.906602   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:26:07.914818   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:26:07.922973   56422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:26:07.923023   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:26:07.931368   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:26:07.939626   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:26:07.947645   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:26:07.955691   56422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:26:07.963979   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:26:07.972016   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:26:07.980156   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:26:07.988329   56422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:26:07.995826   56422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:26:08.002983   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:08.097391   56422 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:26:08.116127   56422 start.go:494] detecting cgroup driver to use...
	I0505 14:26:08.116204   56422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:26:08.130178   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:26:08.148361   56422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:26:08.164449   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:26:08.176599   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:26:08.192971   56422 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:26:08.219211   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:26:08.232143   56422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:26:08.248664   56422 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:26:08.251508   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:26:08.258888   56422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:26:08.272376   56422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:26:08.367912   56422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:26:08.485431   56422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:26:08.485457   56422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:26:08.499472   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:08.597077   56422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:26:10.997549   56422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.400475517s)
	I0505 14:26:10.997613   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:26:11.008831   56422 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 14:26:11.022651   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:26:11.033281   56422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:26:11.129759   56422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:26:11.237211   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:11.348068   56422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:26:11.361770   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:26:11.372918   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:11.479824   56422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:26:11.545990   56422 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:26:11.546073   56422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:26:11.551940   56422 start.go:562] Will wait 60s for crictl version
	I0505 14:26:11.552002   56422 ssh_runner.go:195] Run: which crictl
	I0505 14:26:11.555112   56422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:26:11.587866   56422 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:26:11.587939   56422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:26:11.607280   56422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:26:11.644760   56422 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:26:11.665518   56422 out.go:177]   - env NO_PROXY=192.169.0.51
	I0505 14:26:11.686666   56422 main.go:141] libmachine: (ha-671000-m02) Calling .GetIP
	I0505 14:26:11.687088   56422 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:26:11.691802   56422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:26:11.701860   56422 mustload.go:65] Loading cluster: ha-671000
	I0505 14:26:11.702029   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:11.702241   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:26:11.702257   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:26:11.710873   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58129
	I0505 14:26:11.711207   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:26:11.711505   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:26:11.711515   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:26:11.711744   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:26:11.711871   56422 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:26:11.711959   56422 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:26:11.712029   56422 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56435
	I0505 14:26:11.712963   56422 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:26:11.713223   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:26:11.713239   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:26:11.722012   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58131
	I0505 14:26:11.722350   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:26:11.722670   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:26:11.722681   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:26:11.722908   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:26:11.723031   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:26:11.723122   56422 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.52
	I0505 14:26:11.723128   56422 certs.go:194] generating shared ca certs ...
	I0505 14:26:11.723147   56422 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:26:11.723295   56422 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:26:11.723353   56422 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:26:11.723362   56422 certs.go:256] generating profile certs ...
	I0505 14:26:11.723447   56422 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key
	I0505 14:26:11.723530   56422 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key.89cc21bc
	I0505 14:26:11.723579   56422 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key
	I0505 14:26:11.723586   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:26:11.723607   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:26:11.723632   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:26:11.723650   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:26:11.723667   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0505 14:26:11.723685   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0505 14:26:11.723707   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0505 14:26:11.723725   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0505 14:26:11.723819   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:26:11.723857   56422 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:26:11.723865   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:26:11.723898   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:26:11.723928   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:26:11.723958   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:26:11.724030   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:26:11.724073   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:26:11.724094   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:26:11.724112   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:11.724137   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:26:11.724224   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:26:11.724314   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:26:11.724394   56422 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:26:11.724483   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:26:11.751999   56422 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0505 14:26:11.755189   56422 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0505 14:26:11.763230   56422 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0505 14:26:11.766302   56422 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0505 14:26:11.774080   56422 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0505 14:26:11.777211   56422 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0505 14:26:11.785606   56422 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0505 14:26:11.788759   56422 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0505 14:26:11.796725   56422 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0505 14:26:11.799932   56422 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0505 14:26:11.807789   56422 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0505 14:26:11.811039   56422 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0505 14:26:11.819083   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:26:11.839310   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:26:11.859043   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:26:11.878668   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:26:11.897964   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 14:26:11.917139   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0505 14:26:11.936603   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 14:26:11.956030   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 14:26:11.975635   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:26:11.995079   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:26:12.014303   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:26:12.033653   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0505 14:26:12.047622   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0505 14:26:12.061204   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0505 14:26:12.074928   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0505 14:26:12.088662   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0505 14:26:12.102434   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0505 14:26:12.116174   56422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0505 14:26:12.130018   56422 ssh_runner.go:195] Run: openssl version
	I0505 14:26:12.134294   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:26:12.143291   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:12.146682   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:12.146724   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:12.151032   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:26:12.160185   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:26:12.169207   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:26:12.172565   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:26:12.172601   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:26:12.176890   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:26:12.185974   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:26:12.195028   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:26:12.198413   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:26:12.198447   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:26:12.202899   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:26:12.211947   56422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:26:12.215437   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0505 14:26:12.220095   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0505 14:26:12.224496   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0505 14:26:12.228707   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0505 14:26:12.233013   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0505 14:26:12.237249   56422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0505 14:26:12.241531   56422 kubeadm.go:928] updating node {m02 192.169.0.52 8443 v1.30.0 docker true true} ...
	I0505 14:26:12.241588   56422 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:26:12.241602   56422 kube-vip.go:111] generating kube-vip config ...
	I0505 14:26:12.241634   56422 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0505 14:26:12.255148   56422 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0505 14:26:12.255193   56422 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0505 14:26:12.255250   56422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:26:12.263010   56422 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:26:12.263059   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0505 14:26:12.270286   56422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 14:26:12.283827   56422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:26:12.297473   56422 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1345 bytes)
	I0505 14:26:12.310887   56422 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:26:12.313651   56422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:26:12.323270   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:12.425677   56422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:26:12.439798   56422 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.169.0.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 14:26:12.460920   56422 out.go:177] * Verifying Kubernetes components...
	I0505 14:26:12.439995   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:12.502870   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:12.635027   56422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:26:12.661698   56422 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:26:12.661911   56422 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28d3220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 14:26:12.661947   56422 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
	I0505 14:26:12.662158   56422 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:26:12.662228   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:12.662233   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:12.662248   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:12.662251   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.831095   56422 round_trippers.go:574] Response Status: 200 OK in 9168 milliseconds
	I0505 14:26:21.833333   56422 node_ready.go:49] node "ha-671000-m02" has status "Ready":"True"
	I0505 14:26:21.833346   56422 node_ready.go:38] duration metric: took 9.171260477s for node "ha-671000-m02" to be "Ready" ...
	I0505 14:26:21.833352   56422 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:26:21.833397   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:26:21.833404   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.833410   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.833413   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.888407   56422 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0505 14:26:21.894479   56422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.894550   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:21.894557   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.894563   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.894568   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.905539   56422 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0505 14:26:21.906696   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:21.906706   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.906712   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.906716   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.918480   56422 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0505 14:26:21.918906   56422 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:21.918920   56422 pod_ready.go:81] duration metric: took 24.420631ms for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.918933   56422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.918986   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
	I0505 14:26:21.918991   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.918997   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.919004   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.927911   56422 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:26:21.928370   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:21.928379   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.928392   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.928398   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.932025   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:21.932474   56422 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:21.932490   56422 pod_ready.go:81] duration metric: took 13.547162ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.932499   56422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.932542   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
	I0505 14:26:21.932549   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.932555   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.932560   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.934873   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:21.935421   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:21.935432   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.935439   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.935451   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.939103   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:21.939597   56422 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:21.939608   56422 pod_ready.go:81] duration metric: took 7.102453ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.939616   56422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.939678   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
	I0505 14:26:21.939686   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.939694   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.939699   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.943095   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:21.943827   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:21.943836   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.943843   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.943870   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.951584   56422 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 14:26:21.951955   56422 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:21.951964   56422 pod_ready.go:81] duration metric: took 12.339153ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.951986   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:21.952032   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:26:21.952037   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:21.952043   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:21.952048   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:21.954413   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:22.034612   56422 request.go:629] Waited for 79.603205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:22.034644   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:22.034648   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:22.034653   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:22.034657   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:22.036574   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:22.036937   56422 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:22.036946   56422 pod_ready.go:81] duration metric: took 84.955761ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:22.036958   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:22.234525   56422 request.go:629] Waited for 197.502631ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:26:22.234666   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:26:22.234677   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:22.234688   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:22.234695   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:22.237822   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:22.434867   56422 request.go:629] Waited for 196.302077ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:22.434983   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:22.434995   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:22.435007   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:22.435014   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:22.438132   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:22.438732   56422 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:22.438745   56422 pod_ready.go:81] duration metric: took 401.783767ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:22.438754   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:22.633880   56422 request.go:629] Waited for 195.077748ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:26:22.634001   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:26:22.634014   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:22.634024   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:22.634029   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:22.638256   56422 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:26:22.834499   56422 request.go:629] Waited for 195.677287ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:22.834532   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:22.834538   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:22.834544   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:22.834548   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:22.839587   56422 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:26:22.839955   56422 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:22.839965   56422 pod_ready.go:81] duration metric: took 401.208767ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:22.839972   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:23.033620   56422 request.go:629] Waited for 193.611971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:26:23.033696   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:26:23.033702   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:23.033708   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:23.033711   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:23.035813   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:23.235401   56422 request.go:629] Waited for 199.02315ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:23.235459   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:23.235464   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:23.235470   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:23.235473   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:23.237507   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:23.237967   56422 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:23.237976   56422 pod_ready.go:81] duration metric: took 398.003264ms for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:23.237990   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:23.434467   56422 request.go:629] Waited for 196.425595ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:26:23.434591   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:26:23.434601   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:23.434613   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:23.434620   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:23.443698   56422 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0505 14:26:23.633487   56422 request.go:629] Waited for 189.356305ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:23.633558   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:23.633564   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:23.633570   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:23.633573   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:23.635536   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:23.636060   56422 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:23.636068   56422 pod_ready.go:81] duration metric: took 398.075985ms for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:23.636075   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:23.833454   56422 request.go:629] Waited for 197.334607ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:26:23.833494   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:26:23.833499   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:23.833505   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:23.833509   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:23.842022   56422 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0505 14:26:24.033427   56422 request.go:629] Waited for 190.952754ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:26:24.033464   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:26:24.033469   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:24.033475   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:24.033478   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:24.035701   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:24.036072   56422 pod_ready.go:97] node "ha-671000-m04" hosting pod "kube-proxy-b45s6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-671000-m04" has status "Ready":"Unknown"
	I0505 14:26:24.036082   56422 pod_ready.go:81] duration metric: took 400.005942ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	E0505 14:26:24.036089   56422 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-671000-m04" hosting pod "kube-proxy-b45s6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-671000-m04" has status "Ready":"Unknown"
	I0505 14:26:24.036100   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:24.235421   56422 request.go:629] Waited for 199.280472ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:26:24.235563   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:26:24.235574   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:24.235585   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:24.235591   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:24.238925   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:24.434854   56422 request.go:629] Waited for 195.374621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:24.434892   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:24.434900   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:24.434909   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:24.434916   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:24.437449   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:24.437976   56422 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:24.437985   56422 pod_ready.go:81] duration metric: took 401.882585ms for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:24.437992   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:24.635409   56422 request.go:629] Waited for 197.38199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:26:24.635456   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:26:24.635465   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:24.635483   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:24.635487   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:24.637859   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:24.834927   56422 request.go:629] Waited for 196.712132ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:24.835032   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:24.835043   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:24.835054   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:24.835060   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:24.838798   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:24.839453   56422 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:24.839479   56422 pod_ready.go:81] duration metric: took 401.485693ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:24.839486   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:25.033610   56422 request.go:629] Waited for 194.080157ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:26:25.033714   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:26:25.033722   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:25.033730   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:25.033734   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:25.035808   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:25.233509   56422 request.go:629] Waited for 197.18268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:25.233593   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:26:25.233607   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:25.233619   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:25.233627   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:25.236639   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:25.237045   56422 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:26:25.237055   56422 pod_ready.go:81] duration metric: took 397.567877ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:25.237063   56422 pod_ready.go:38] duration metric: took 3.403737228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:26:25.237084   56422 api_server.go:52] waiting for apiserver process to appear ...
	I0505 14:26:25.237136   56422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:26:25.249318   56422 api_server.go:72] duration metric: took 12.809617715s to wait for apiserver process to appear ...
	I0505 14:26:25.249328   56422 api_server.go:88] waiting for apiserver healthz status ...
	I0505 14:26:25.249341   56422 api_server.go:253] Checking apiserver healthz at https://192.169.0.51:8443/healthz ...
	I0505 14:26:25.253405   56422 api_server.go:279] https://192.169.0.51:8443/healthz returned 200:
	ok
	I0505 14:26:25.253438   56422 round_trippers.go:463] GET https://192.169.0.51:8443/version
	I0505 14:26:25.253442   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:25.253448   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:25.253459   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:25.254070   56422 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0505 14:26:25.254214   56422 api_server.go:141] control plane version: v1.30.0
	I0505 14:26:25.254224   56422 api_server.go:131] duration metric: took 4.89179ms to wait for apiserver health ...
	I0505 14:26:25.254229   56422 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 14:26:25.433655   56422 request.go:629] Waited for 179.387834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:26:25.433711   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:26:25.433722   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:25.433733   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:25.433739   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:25.439414   56422 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:26:25.443163   56422 system_pods.go:59] 19 kube-system pods found
	I0505 14:26:25.443180   56422 system_pods.go:61] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running
	I0505 14:26:25.443184   56422 system_pods.go:61] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running
	I0505 14:26:25.443189   56422 system_pods.go:61] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:26:25.443192   56422 system_pods.go:61] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:26:25.443195   56422 system_pods.go:61] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:26:25.443197   56422 system_pods.go:61] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:26:25.443200   56422 system_pods.go:61] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running
	I0505 14:26:25.443203   56422 system_pods.go:61] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:26:25.443205   56422 system_pods.go:61] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:26:25.443208   56422 system_pods.go:61] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:26:25.443211   56422 system_pods.go:61] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:26:25.443213   56422 system_pods.go:61] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:26:25.443216   56422 system_pods.go:61] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:26:25.443218   56422 system_pods.go:61] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:26:25.443221   56422 system_pods.go:61] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:26:25.443224   56422 system_pods.go:61] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:26:25.443226   56422 system_pods.go:61] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:26:25.443229   56422 system_pods.go:61] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:26:25.443236   56422 system_pods.go:61] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:26:25.443241   56422 system_pods.go:74] duration metric: took 189.010247ms to wait for pod list to return data ...
	I0505 14:26:25.443247   56422 default_sa.go:34] waiting for default service account to be created ...
	I0505 14:26:25.634693   56422 request.go:629] Waited for 191.404579ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:26:25.634786   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/default/serviceaccounts
	I0505 14:26:25.634797   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:25.634808   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:25.634842   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:25.638246   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:25.638413   56422 default_sa.go:45] found service account: "default"
	I0505 14:26:25.638426   56422 default_sa.go:55] duration metric: took 195.175668ms for default service account to be created ...
	I0505 14:26:25.638433   56422 system_pods.go:116] waiting for k8s-apps to be running ...
	I0505 14:26:25.834373   56422 request.go:629] Waited for 195.897418ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:26:25.834423   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:26:25.834431   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:25.834443   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:25.834448   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:25.839772   56422 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:26:25.843606   56422 system_pods.go:86] 19 kube-system pods found
	I0505 14:26:25.843617   56422 system_pods.go:89] "coredns-7db6d8ff4d-hqtd2" [e76b43f2-8189-4e5d-adc3-ced554e9ee07] Running
	I0505 14:26:25.843622   56422 system_pods.go:89] "coredns-7db6d8ff4d-kjf54" [c780145e-9d82-4451-94e8-dee09a63eadb] Running
	I0505 14:26:25.843625   56422 system_pods.go:89] "etcd-ha-671000" [ea35bd5e-5a34-48e9-a9b7-4b200b88fb13] Running
	I0505 14:26:25.843628   56422 system_pods.go:89] "etcd-ha-671000-m02" [15f721f6-9618-44f4-9160-dbf9f0a41f73] Running
	I0505 14:26:25.843631   56422 system_pods.go:89] "kindnet-ffg2p" [043d485d-6127-4de3-9e4f-cfbf554fa987] Running
	I0505 14:26:25.843634   56422 system_pods.go:89] "kindnet-kn94d" [863e615e-f22a-4d15-8510-4f5c7a42b8cd] Running
	I0505 14:26:25.843641   56422 system_pods.go:89] "kindnet-zvz9x" [17260177-9933-46e9-85d2-86fe51806c25] Running
	I0505 14:26:25.843645   56422 system_pods.go:89] "kube-apiserver-ha-671000" [a6f2585c-aec7-4ba9-aa78-e22c55a798ea] Running
	I0505 14:26:25.843648   56422 system_pods.go:89] "kube-apiserver-ha-671000-m02" [bbb10014-5b6a-4377-8e62-a26e6925ee07] Running
	I0505 14:26:25.843651   56422 system_pods.go:89] "kube-controller-manager-ha-671000" [9f4e9073-99da-4ed5-8b5f-72106d630807] Running
	I0505 14:26:25.843654   56422 system_pods.go:89] "kube-controller-manager-ha-671000-m02" [074a2d58-b5a5-4fd7-8c03-c1a357ee0c4f] Running
	I0505 14:26:25.843657   56422 system_pods.go:89] "kube-proxy-5jwqs" [72f1cbf9-ca3e-4354-a8f8-7239c77af74a] Running
	I0505 14:26:25.843661   56422 system_pods.go:89] "kube-proxy-b45s6" [4d403d96-8102-44d7-a76f-3a64f30a7132] Running
	I0505 14:26:25.843664   56422 system_pods.go:89] "kube-proxy-kppdj" [5b47d66e-31b1-4892-85ef-0c3ad3bec4cb] Running
	I0505 14:26:25.843667   56422 system_pods.go:89] "kube-scheduler-ha-671000" [1ae249c2-7cd6-4c14-80aa-1d88d491dfc2] Running
	I0505 14:26:25.843672   56422 system_pods.go:89] "kube-scheduler-ha-671000-m02" [27dd9d5f-8e2a-4597-b743-0c79fa5df5b1] Running
	I0505 14:26:25.843675   56422 system_pods.go:89] "kube-vip-ha-671000" [dcc7956d-0333-45ed-afff-b8429485ef9a] Running
	I0505 14:26:25.843679   56422 system_pods.go:89] "kube-vip-ha-671000-m02" [9a09b965-61cb-4026-9bcd-0daa29f18c86] Running
	I0505 14:26:25.843696   56422 system_pods.go:89] "storage-provisioner" [f376315c-5f9b-46f4-b295-6d7d025063bc] Running
	I0505 14:26:25.843704   56422 system_pods.go:126] duration metric: took 205.26819ms to wait for k8s-apps to be running ...
	I0505 14:26:25.843710   56422 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 14:26:25.843759   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:26:25.855272   56422 system_svc.go:56] duration metric: took 11.558054ms WaitForService to wait for kubelet
	I0505 14:26:25.855288   56422 kubeadm.go:576] duration metric: took 13.4155948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:26:25.855305   56422 node_conditions.go:102] verifying NodePressure condition ...
	I0505 14:26:26.033891   56422 request.go:629] Waited for 178.541892ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
	I0505 14:26:26.033974   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
	I0505 14:26:26.033986   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:26.033997   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:26.034003   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:26.037378   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:26.038142   56422 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:26:26.038155   56422 node_conditions.go:123] node cpu capacity is 2
	I0505 14:26:26.038164   56422 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:26:26.038167   56422 node_conditions.go:123] node cpu capacity is 2
	I0505 14:26:26.038171   56422 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:26:26.038173   56422 node_conditions.go:123] node cpu capacity is 2
	I0505 14:26:26.038177   56422 node_conditions.go:105] duration metric: took 182.869724ms to run NodePressure ...
	I0505 14:26:26.038185   56422 start.go:240] waiting for startup goroutines ...
	I0505 14:26:26.038204   56422 start.go:254] writing updated cluster config ...
	I0505 14:26:26.059289   56422 out.go:177] 
	I0505 14:26:26.096087   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:26.096214   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:26:26.117852   56422 out.go:177] * Starting "ha-671000-m04" worker node in "ha-671000" cluster
	I0505 14:26:26.159742   56422 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 14:26:26.159770   56422 cache.go:56] Caching tarball of preloaded images
	I0505 14:26:26.159936   56422 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 14:26:26.159954   56422 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 14:26:26.160078   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:26:26.160897   56422 start.go:360] acquireMachinesLock for ha-671000-m04: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 14:26:26.160972   56422 start.go:364] duration metric: took 57.439µs to acquireMachinesLock for "ha-671000-m04"
	I0505 14:26:26.160999   56422 start.go:96] Skipping create...Using existing machine configuration
	I0505 14:26:26.161006   56422 fix.go:54] fixHost starting: m04
	I0505 14:26:26.161288   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:26:26.161315   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:26:26.170613   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58136
	I0505 14:26:26.170967   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:26:26.171362   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:26:26.171381   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:26:26.171620   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:26:26.171726   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:26.171803   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetState
	I0505 14:26:26.171883   56422 main.go:141] libmachine: (ha-671000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:26:26.171960   56422 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid from json: 55847
	I0505 14:26:26.172854   56422 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid 55847 missing from process table
	I0505 14:26:26.172885   56422 fix.go:112] recreateIfNeeded on ha-671000-m04: state=Stopped err=<nil>
	I0505 14:26:26.172895   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	W0505 14:26:26.172975   56422 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 14:26:26.193864   56422 out.go:177] * Restarting existing hyperkit VM for "ha-671000-m04" ...
	I0505 14:26:26.214754   56422 main.go:141] libmachine: (ha-671000-m04) Calling .Start
	I0505 14:26:26.214949   56422 main.go:141] libmachine: (ha-671000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:26:26.214989   56422 main.go:141] libmachine: (ha-671000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/hyperkit.pid
	I0505 14:26:26.215071   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Using UUID 8d0fd528-ba32-44c8-aaa8-77d77d483dce
	I0505 14:26:26.243274   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Generated MAC f6:fa:b5:fe:20:2f
	I0505 14:26:26.243297   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000
	I0505 14:26:26.243421   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8d0fd528-ba32-44c8-aaa8-77d77d483dce", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:26:26.243458   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8d0fd528-ba32-44c8-aaa8-77d77d483dce", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00029b650)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0505 14:26:26.243545   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8d0fd528-ba32-44c8-aaa8-77d77d483dce", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/ha-671000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"}
	I0505 14:26:26.243579   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8d0fd528-ba32-44c8-aaa8-77d77d483dce -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/ha-671000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-671000"
	I0505 14:26:26.243596   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 14:26:26.244964   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 DEBUG: hyperkit: Pid is 56456
	I0505 14:26:26.245443   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Attempt 0
	I0505 14:26:26.245453   56422 main.go:141] libmachine: (ha-671000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:26:26.245513   56422 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid from json: 56456
	I0505 14:26:26.246577   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Searching for f6:fa:b5:fe:20:2f in /var/db/dhcpd_leases ...
	I0505 14:26:26.246681   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0505 14:26:26.246693   56422 main.go:141] libmachine: (ha-671000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 14:26:26.246723   56422 main.go:141] libmachine: (ha-671000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 14:26:26.246742   56422 main.go:141] libmachine: (ha-671000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 14:26:26.246754   56422 main.go:141] libmachine: (ha-671000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x6637f817}
	I0505 14:26:26.246765   56422 main.go:141] libmachine: (ha-671000-m04) DBG | Found match: f6:fa:b5:fe:20:2f
	I0505 14:26:26.246781   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetConfigRaw
	I0505 14:26:26.246783   56422 main.go:141] libmachine: (ha-671000-m04) DBG | IP: 192.169.0.54
	I0505 14:26:26.247502   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetIP
	I0505 14:26:26.247720   56422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/config.json ...
	I0505 14:26:26.248196   56422 machine.go:94] provisionDockerMachine start ...
	I0505 14:26:26.248206   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:26.248321   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:26.248415   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:26.248536   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:26.248642   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:26.248735   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:26.248880   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:26.249059   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:26.249067   56422 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 14:26:26.253372   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 14:26:26.261458   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 14:26:26.262400   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:26:26.262420   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:26:26.262443   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:26:26.262464   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:26:26.648722   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 14:26:26.648742   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 14:26:26.763438   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 14:26:26.763457   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 14:26:26.763467   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 14:26:26.763474   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 14:26:26.764316   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 14:26:26.764328   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 14:26:32.140290   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 14:26:32.140309   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 14:26:32.140319   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 14:26:32.163761   56422 main.go:141] libmachine: (ha-671000-m04) DBG | 2024/05/05 14:26:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 14:26:45.316721   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0505 14:26:45.316739   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetMachineName
	I0505 14:26:45.316864   56422 buildroot.go:166] provisioning hostname "ha-671000-m04"
	I0505 14:26:45.316873   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetMachineName
	I0505 14:26:45.316960   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.317034   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:45.317113   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.317201   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.317294   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:45.317443   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:45.317578   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:45.317589   56422 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-671000-m04 && echo "ha-671000-m04" | sudo tee /etc/hostname
	I0505 14:26:45.387405   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-671000-m04
	
	I0505 14:26:45.387424   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.387563   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:45.387665   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.387760   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.387862   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:45.387995   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:45.388136   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:45.388148   56422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-671000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-671000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-671000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 14:26:45.454390   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 14:26:45.454407   56422 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 14:26:45.454431   56422 buildroot.go:174] setting up certificates
	I0505 14:26:45.454438   56422 provision.go:84] configureAuth start
	I0505 14:26:45.454446   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetMachineName
	I0505 14:26:45.454575   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetIP
	I0505 14:26:45.454676   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.454754   56422 provision.go:143] copyHostCerts
	I0505 14:26:45.454784   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:26:45.454834   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 14:26:45.454839   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 14:26:45.454979   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 14:26:45.455186   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:26:45.455223   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 14:26:45.455228   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 14:26:45.455298   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 14:26:45.455433   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:26:45.455462   56422 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 14:26:45.455471   56422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 14:26:45.455541   56422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 14:26:45.455678   56422 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.ha-671000-m04 san=[127.0.0.1 192.169.0.54 ha-671000-m04 localhost minikube]
	I0505 14:26:45.629808   56422 provision.go:177] copyRemoteCerts
	I0505 14:26:45.629903   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 14:26:45.629916   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.630116   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:45.630335   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.630428   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:45.630516   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/id_rsa Username:docker}
	I0505 14:26:45.666454   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0505 14:26:45.666527   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 14:26:45.686264   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0505 14:26:45.686340   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0505 14:26:45.706006   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0505 14:26:45.706097   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 14:26:45.725580   56422 provision.go:87] duration metric: took 271.135932ms to configureAuth
	I0505 14:26:45.725594   56422 buildroot.go:189] setting minikube options for container-runtime
	I0505 14:26:45.725767   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:45.725779   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:45.725911   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.726026   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:45.726114   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.726205   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.726291   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:45.726415   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:45.726543   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:45.726551   56422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 14:26:45.785885   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 14:26:45.785898   56422 buildroot.go:70] root file system type: tmpfs
	I0505 14:26:45.785984   56422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 14:26:45.785999   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.786120   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:45.786220   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.786316   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.786394   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:45.786529   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:45.786679   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:45.786725   56422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.51"
	Environment="NO_PROXY=192.169.0.51,192.169.0.52"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 14:26:45.855829   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.51
	Environment=NO_PROXY=192.169.0.51,192.169.0.52
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 14:26:45.855846   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:45.856021   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:45.856099   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.856195   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:45.856302   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:45.856429   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:45.856576   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:45.856588   56422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 14:26:47.394967   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 14:26:47.394982   56422 machine.go:97] duration metric: took 21.14697947s to provisionDockerMachine
	I0505 14:26:47.394990   56422 start.go:293] postStartSetup for "ha-671000-m04" (driver="hyperkit")
	I0505 14:26:47.394998   56422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 14:26:47.395012   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:47.395217   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 14:26:47.395231   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:47.395326   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:47.395411   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:47.395502   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:47.395592   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/id_rsa Username:docker}
	I0505 14:26:47.431497   56422 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 14:26:47.434539   56422 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 14:26:47.434549   56422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 14:26:47.434629   56422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 14:26:47.434767   56422 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 14:26:47.434773   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /etc/ssl/certs/542102.pem
	I0505 14:26:47.434928   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 14:26:47.442125   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:26:47.461921   56422 start.go:296] duration metric: took 66.923305ms for postStartSetup
	I0505 14:26:47.461944   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:47.462123   56422 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0505 14:26:47.462137   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:47.462220   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:47.462306   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:47.462387   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:47.462468   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/id_rsa Username:docker}
	I0505 14:26:47.499076   56422 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0505 14:26:47.499145   56422 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0505 14:26:47.549633   56422 fix.go:56] duration metric: took 21.38882755s for fixHost
	I0505 14:26:47.549664   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:47.549794   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:47.549884   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:47.549979   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:47.550067   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:47.550192   56422 main.go:141] libmachine: Using SSH client type: native
	I0505 14:26:47.550335   56422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x142db80] 0x14308e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0505 14:26:47.550342   56422 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 14:26:47.609984   56422 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714944407.769672983
	
	I0505 14:26:47.609996   56422 fix.go:216] guest clock: 1714944407.769672983
	I0505 14:26:47.610002   56422 fix.go:229] Guest: 2024-05-05 14:26:47.769672983 -0700 PDT Remote: 2024-05-05 14:26:47.549652 -0700 PDT m=+72.429641444 (delta=220.020983ms)
	I0505 14:26:47.610012   56422 fix.go:200] guest clock delta is within tolerance: 220.020983ms
	I0505 14:26:47.610015   56422 start.go:83] releasing machines lock for "ha-671000-m04", held for 21.449240442s
	I0505 14:26:47.610038   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:47.610169   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetIP
	I0505 14:26:47.631632   56422 out.go:177] * Found network options:
	I0505 14:26:47.653487   56422 out.go:177]   - NO_PROXY=192.169.0.51,192.169.0.52
	W0505 14:26:47.674341   56422 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:26:47.674366   56422 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:26:47.674384   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:47.675021   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:47.675202   56422 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:26:47.675324   56422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 14:26:47.675366   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	W0505 14:26:47.675442   56422 proxy.go:119] fail to check proxy env: Error ip not in block
	W0505 14:26:47.675465   56422 proxy.go:119] fail to check proxy env: Error ip not in block
	I0505 14:26:47.675558   56422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0505 14:26:47.675582   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:47.675583   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:26:47.675821   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:47.675847   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:26:47.675962   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:47.675971   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:26:47.676096   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/id_rsa Username:docker}
	I0505 14:26:47.676115   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:26:47.676232   56422 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/id_rsa Username:docker}
	W0505 14:26:47.709012   56422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 14:26:47.709086   56422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 14:26:47.759107   56422 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 14:26:47.759122   56422 start.go:494] detecting cgroup driver to use...
	I0505 14:26:47.759191   56422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:26:47.774327   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 14:26:47.783455   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 14:26:47.792548   56422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 14:26:47.792605   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 14:26:47.801559   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:26:47.810491   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 14:26:47.819273   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 14:26:47.828915   56422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 14:26:47.838387   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 14:26:47.847369   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 14:26:47.856385   56422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 14:26:47.865417   56422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 14:26:47.873544   56422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 14:26:47.881706   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:47.990316   56422 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 14:26:48.009637   56422 start.go:494] detecting cgroup driver to use...
	I0505 14:26:48.009706   56422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 14:26:48.023769   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:26:48.037168   56422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 14:26:48.051617   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 14:26:48.061573   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:26:48.072059   56422 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 14:26:48.094007   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 14:26:48.104386   56422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 14:26:48.119453   56422 ssh_runner.go:195] Run: which cri-dockerd
	I0505 14:26:48.122411   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 14:26:48.129565   56422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 14:26:48.144850   56422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 14:26:48.248697   56422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 14:26:48.342203   56422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 14:26:48.342231   56422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 14:26:48.357063   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:48.460922   56422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 14:26:50.727502   56422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.266583103s)
	I0505 14:26:50.727568   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 14:26:50.738697   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:26:50.748869   56422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 14:26:50.840584   56422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 14:26:50.946697   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:51.043713   56422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 14:26:51.056427   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 14:26:51.067330   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:51.159630   56422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 14:26:51.231212   56422 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 14:26:51.231292   56422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 14:26:51.235554   56422 start.go:562] Will wait 60s for crictl version
	I0505 14:26:51.235604   56422 ssh_runner.go:195] Run: which crictl
	I0505 14:26:51.241096   56422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 14:26:51.270395   56422 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 14:26:51.270475   56422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:26:51.289757   56422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 14:26:51.329670   56422 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 14:26:51.405227   56422 out.go:177]   - env NO_PROXY=192.169.0.51
	I0505 14:26:51.480156   56422 out.go:177]   - env NO_PROXY=192.169.0.51,192.169.0.52
	I0505 14:26:51.517301   56422 main.go:141] libmachine: (ha-671000-m04) Calling .GetIP
	I0505 14:26:51.517570   56422 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 14:26:51.521073   56422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:26:51.531846   56422 mustload.go:65] Loading cluster: ha-671000
	I0505 14:26:51.532055   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:51.532288   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:26:51.532313   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:26:51.541609   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58158
	I0505 14:26:51.541995   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:26:51.542418   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:26:51.542443   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:26:51.542668   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:26:51.542783   56422 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:26:51.542865   56422 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:26:51.542957   56422 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56435
	I0505 14:26:51.543962   56422 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:26:51.544242   56422 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:26:51.544275   56422 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:26:51.553440   56422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58160
	I0505 14:26:51.553791   56422 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:26:51.554181   56422 main.go:141] libmachine: Using API Version  1
	I0505 14:26:51.554196   56422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:26:51.554400   56422 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:26:51.554513   56422 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:26:51.554613   56422 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000 for IP: 192.169.0.54
	I0505 14:26:51.554624   56422 certs.go:194] generating shared ca certs ...
	I0505 14:26:51.554633   56422 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 14:26:51.554787   56422 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 14:26:51.554838   56422 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 14:26:51.554849   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0505 14:26:51.554878   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0505 14:26:51.554897   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0505 14:26:51.554915   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0505 14:26:51.555004   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 14:26:51.555052   56422 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 14:26:51.555062   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 14:26:51.555103   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 14:26:51.555137   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 14:26:51.555166   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 14:26:51.555236   56422 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 14:26:51.555272   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem -> /usr/share/ca-certificates/54210.pem
	I0505 14:26:51.555292   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> /usr/share/ca-certificates/542102.pem
	I0505 14:26:51.555310   56422 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:51.555342   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 14:26:51.577103   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 14:26:51.598673   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 14:26:51.620166   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 14:26:51.641727   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 14:26:51.663703   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 14:26:51.683706   56422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 14:26:51.703959   56422 ssh_runner.go:195] Run: openssl version
	I0505 14:26:51.708364   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 14:26:51.716882   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:51.720505   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:51.720559   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 14:26:51.725089   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 14:26:51.734144   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 14:26:51.743152   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 14:26:51.746658   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 14:26:51.746706   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 14:26:51.751105   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
	I0505 14:26:51.759611   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 14:26:51.768353   56422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 14:26:51.771815   56422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 14:26:51.771856   56422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 14:26:51.776077   56422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 14:26:51.784575   56422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 14:26:51.787808   56422 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 14:26:51.787847   56422 kubeadm.go:928] updating node {m04 192.169.0.54 0 v1.30.0 docker false true} ...
	I0505 14:26:51.787913   56422 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-671000-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-671000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 14:26:51.787956   56422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 14:26:51.795579   56422 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 14:26:51.795632   56422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0505 14:26:51.802962   56422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0505 14:26:51.816838   56422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 14:26:51.831180   56422 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0505 14:26:51.834262   56422 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 14:26:51.843984   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:51.943950   56422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:26:51.958969   56422 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.169.0.54 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0505 14:26:51.981043   56422 out.go:177] * Verifying Kubernetes components...
	I0505 14:26:51.959155   56422 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:26:52.021012   56422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 14:26:52.141811   56422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 14:26:52.160182   56422 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:26:52.160395   56422 kapi.go:59] client config for ha-671000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/ha-671000/client.key", CAFile:"/Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28d3220), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0505 14:26:52.160435   56422 kubeadm.go:477] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.51:8443
	I0505 14:26:52.160600   56422 node_ready.go:35] waiting up to 6m0s for node "ha-671000-m04" to be "Ready" ...
	I0505 14:26:52.160643   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:26:52.160647   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:52.160653   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:52.160655   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:52.162795   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:52.661862   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:26:52.661929   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:52.661937   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:52.661943   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:52.669216   56422 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0505 14:26:52.669778   56422 node_ready.go:49] node "ha-671000-m04" has status "Ready":"True"
	I0505 14:26:52.669789   56422 node_ready.go:38] duration metric: took 509.185433ms for node "ha-671000-m04" to be "Ready" ...
	I0505 14:26:52.669798   56422 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:26:52.669847   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods
	I0505 14:26:52.669854   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:52.669860   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:52.669863   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:52.675981   56422 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0505 14:26:52.680087   56422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:26:52.680155   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:52.680161   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:52.680167   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:52.680170   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:52.686011   56422 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:26:52.687550   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:52.687562   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:52.687568   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:52.687571   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:52.692328   56422 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:26:53.180251   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:53.180268   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:53.180276   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:53.180280   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:53.183653   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:53.184042   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:53.184050   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:53.184055   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:53.184059   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:53.186138   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:53.680819   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:53.680835   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:53.680842   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:53.680846   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:53.683445   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:53.684023   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:53.684036   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:53.684042   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:53.684046   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:53.685997   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:54.180223   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:54.180239   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:54.180245   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:54.180248   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:54.182624   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:54.183262   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:54.183270   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:54.183275   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:54.183279   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:54.185126   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:54.681962   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:54.681987   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:54.681999   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:54.682006   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:54.687505   56422 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0505 14:26:54.689777   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:54.689785   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:54.689791   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:54.689795   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:54.693119   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:54.693528   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:26:55.181446   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:55.190208   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:55.190215   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:55.190220   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:55.192302   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:55.192817   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:55.192825   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:55.192829   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:55.192843   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:55.194450   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:55.680404   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:55.680419   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:55.680460   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:55.680463   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:55.682844   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:55.683295   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:55.683303   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:55.683308   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:55.683312   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:55.684943   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:56.180687   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:56.180699   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:56.180705   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:56.180708   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:56.183106   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:56.183737   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:56.183745   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:56.183751   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:56.183754   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:56.185527   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:56.680527   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:56.680552   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:56.680563   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:56.680570   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:56.684340   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:56.684933   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:56.684941   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:56.684946   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:56.684949   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:56.686904   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:57.180264   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:57.180289   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:57.180302   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:57.180307   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:57.183572   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:57.184225   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:57.184233   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:57.184238   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:57.184253   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:57.186235   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:57.186583   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:26:57.680687   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:57.680714   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:57.680791   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:57.680801   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:57.684036   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:57.684759   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:57.684766   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:57.684771   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:57.684775   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:57.686721   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:58.181827   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:58.181844   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:58.181851   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:58.181854   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:58.183994   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:58.184524   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:58.184531   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:58.184536   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:58.184541   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:58.186272   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:58.681009   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:58.681070   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:58.681084   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:58.681094   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:58.684418   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:58.684995   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:58.685006   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:58.685014   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:58.685019   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:58.687076   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:59.181252   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:59.181276   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:59.181287   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:59.181294   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:59.184278   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:26:59.184764   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:59.184770   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:59.184776   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:59.184779   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:59.186504   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:26:59.186864   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:26:59.681303   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:26:59.681327   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:59.681399   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:59.681406   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:59.685374   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:26:59.685958   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:26:59.685966   56422 round_trippers.go:469] Request Headers:
	I0505 14:26:59.685972   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:26:59.685976   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:26:59.687774   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:00.180657   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:00.184885   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:00.184901   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:00.184906   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:00.187100   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:00.187644   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:00.187651   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:00.187657   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:00.187660   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:00.189490   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:00.680922   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:00.680943   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:00.680954   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:00.680958   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:00.685001   56422 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0505 14:27:00.685627   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:00.685634   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:00.685639   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:00.685644   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:00.687252   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:01.180319   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:01.180338   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:01.180349   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:01.180357   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:01.184083   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:01.184480   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:01.184488   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:01.184493   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:01.184509   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:01.186139   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:01.681292   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:01.681308   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:01.681317   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:01.681323   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:01.684194   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:01.684728   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:01.684735   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:01.684740   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:01.684744   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:01.686268   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:01.686589   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:02.180172   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:02.180185   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:02.180192   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:02.180196   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:02.182521   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:02.182906   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:02.182917   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:02.182923   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:02.182927   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:02.184969   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:02.680467   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:02.680480   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:02.680487   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:02.680489   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:02.683484   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:02.683892   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:02.683900   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:02.683904   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:02.683907   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:02.685837   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:03.181106   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:03.181118   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:03.181124   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:03.181127   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:03.183520   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:03.184044   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:03.184052   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:03.184058   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:03.184067   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:03.185578   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:03.680436   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:03.680457   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:03.680471   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:03.680477   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:03.683698   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:03.684477   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:03.684485   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:03.684491   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:03.684500   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:03.686143   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:04.180375   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:04.180390   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:04.180398   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:04.180402   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:04.182913   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:04.183320   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:04.183328   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:04.183333   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:04.183337   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:04.184841   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:04.185193   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:04.680616   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:04.680668   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:04.680688   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:04.680697   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:04.684085   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:04.684766   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:04.684774   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:04.684779   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:04.684783   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:04.686636   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:05.181064   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:05.188094   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:05.188108   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:05.188114   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:05.191026   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:05.191648   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:05.191658   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:05.191669   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:05.191675   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:05.193454   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:05.680411   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:05.680431   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:05.680443   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:05.680451   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:05.683894   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:05.684674   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:05.684681   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:05.684687   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:05.684690   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:05.686352   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:06.180149   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:06.180174   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:06.180182   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:06.180185   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:06.182834   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:06.183461   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:06.183468   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:06.183473   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:06.183479   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:06.185188   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:06.185461   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:06.680352   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:06.680372   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:06.680384   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:06.680389   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:06.683702   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:06.684412   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:06.684421   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:06.684426   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:06.684430   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:06.686097   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:07.180286   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:07.180306   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:07.180318   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:07.180326   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:07.183622   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:07.184196   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:07.184203   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:07.184209   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:07.184212   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:07.185627   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:07.680522   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:07.680542   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:07.680556   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:07.680564   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:07.684019   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:07.684521   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:07.684529   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:07.684534   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:07.684549   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:07.686044   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:08.181672   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:08.181688   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:08.181700   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:08.181711   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:08.184053   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:08.184516   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:08.184524   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:08.184530   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:08.184532   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:08.186087   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:08.186432   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:08.680618   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:08.680639   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:08.680663   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:08.680671   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:08.683841   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:08.684293   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:08.684300   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:08.684306   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:08.684309   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:08.686043   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:09.180387   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:09.180408   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:09.180419   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:09.180425   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:09.182926   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:09.183538   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:09.183546   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:09.183552   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:09.183556   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:09.185071   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:09.680829   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:09.680889   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:09.680936   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:09.680949   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:09.683907   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:09.684517   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:09.684524   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:09.684529   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:09.684533   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:09.686111   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:10.180870   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:10.188146   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:10.188160   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:10.188169   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:10.191924   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:10.192453   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:10.192461   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:10.192467   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:10.192469   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:10.194261   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:10.194630   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:10.682043   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:10.682063   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:10.682073   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:10.682078   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:10.685396   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:10.685983   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:10.685990   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:10.685994   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:10.685997   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:10.687708   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:11.180656   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:11.180675   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:11.180687   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:11.180692   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:11.183495   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:11.184150   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:11.184157   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:11.184162   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:11.184165   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:11.185553   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:11.680883   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:11.680907   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:11.680953   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:11.680974   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:11.683968   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:11.684524   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:11.684531   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:11.684536   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:11.684539   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:11.686074   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:12.180179   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:12.180191   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:12.180197   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:12.180201   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:12.182319   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:12.182795   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:12.182802   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:12.182808   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:12.182812   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:12.184665   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:12.681371   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:12.681396   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:12.681455   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:12.681463   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:12.683865   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:12.684287   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:12.684295   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:12.684301   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:12.684304   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:12.686251   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:12.686622   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:13.181250   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:13.181270   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:13.181280   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:13.181288   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:13.184132   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:13.184621   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:13.184629   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:13.184635   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:13.184639   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:13.186121   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:13.681897   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:13.681912   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:13.681921   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:13.681927   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:13.684812   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:13.685277   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:13.685286   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:13.685291   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:13.685295   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:13.686856   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:14.180372   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:14.180387   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:14.180395   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:14.180400   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:14.182820   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:14.183391   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:14.183398   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:14.183404   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:14.183420   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:14.185034   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:14.681162   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:14.681186   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:14.681198   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:14.681205   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:14.684509   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:14.685321   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:14.685328   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:14.685333   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:14.685337   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:14.686916   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:14.687300   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:15.181562   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:15.187315   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:15.187331   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:15.187339   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:15.190692   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:15.191421   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:15.191428   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:15.191433   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:15.191436   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:15.193137   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:15.681177   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:15.681201   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:15.681212   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:15.681221   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:15.684418   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:15.684972   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:15.684983   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:15.684991   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:15.684995   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:15.686651   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:16.180839   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:16.180853   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:16.180859   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:16.180863   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:16.182714   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:16.183337   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:16.183345   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:16.183351   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:16.183354   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:16.184916   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:16.680358   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:16.680402   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:16.680412   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:16.680415   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:16.683030   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:16.683575   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:16.683582   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:16.683587   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:16.683591   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:16.685046   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:17.181135   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:17.181154   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:17.181162   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:17.181168   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:17.183819   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:17.184359   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:17.184367   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:17.184373   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:17.184377   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:17.185988   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:17.186456   56422 pod_ready.go:102] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"False"
	I0505 14:27:17.682117   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:17.682136   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:17.682147   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:17.682154   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:17.685537   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:17.686400   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:17.686408   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:17.686413   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:17.686416   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:17.688071   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.181328   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hqtd2
	I0505 14:27:18.181410   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.181424   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.181430   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.184636   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:18.185058   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:18.185065   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.185071   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.185074   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.186606   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.186969   56422 pod_ready.go:92] pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.186977   56422 pod_ready.go:81] duration metric: took 25.507116934s for pod "coredns-7db6d8ff4d-hqtd2" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.186984   56422 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.187035   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kjf54
	I0505 14:27:18.187040   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.187046   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.187050   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.188644   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.189069   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:18.189076   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.189082   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.189086   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.190794   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.191099   56422 pod_ready.go:92] pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.191108   56422 pod_ready.go:81] duration metric: took 4.119185ms for pod "coredns-7db6d8ff4d-kjf54" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.191115   56422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.191145   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000
	I0505 14:27:18.191150   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.191155   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.191160   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.192858   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.193201   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:18.193208   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.193214   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.193218   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.194741   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.195069   56422 pod_ready.go:92] pod "etcd-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.195077   56422 pod_ready.go:81] duration metric: took 3.957231ms for pod "etcd-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.195086   56422 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.195115   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/etcd-ha-671000-m02
	I0505 14:27:18.195120   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.195125   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.195129   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.196725   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.197053   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:18.197060   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.197065   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.197069   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.198512   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.198802   56422 pod_ready.go:92] pod "etcd-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.198810   56422 pod_ready.go:81] duration metric: took 3.718498ms for pod "etcd-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.198821   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.198849   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000
	I0505 14:27:18.198854   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.198859   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.198868   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.200335   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.200745   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:18.200752   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.200756   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.200766   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.202265   56422 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0505 14:27:18.202546   56422 pod_ready.go:92] pod "kube-apiserver-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.202555   56422 pod_ready.go:81] duration metric: took 3.728803ms for pod "kube-apiserver-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.202561   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.381527   56422 request.go:629] Waited for 178.930014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:27:18.381567   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-671000-m02
	I0505 14:27:18.381572   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.381604   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.381609   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.383895   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:18.582238   56422 request.go:629] Waited for 197.953006ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:18.582368   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:18.582379   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.582389   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.582396   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.585458   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:18.585904   56422 pod_ready.go:92] pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.585916   56422 pod_ready.go:81] duration metric: took 383.352919ms for pod "kube-apiserver-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.585928   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.781372   56422 request.go:629] Waited for 195.397481ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:27:18.781425   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000
	I0505 14:27:18.781458   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.781466   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.781470   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.783625   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:18.982297   56422 request.go:629] Waited for 198.104508ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:18.982358   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:18.982388   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:18.982420   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:18.982425   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:18.985145   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:18.985564   56422 pod_ready.go:92] pod "kube-controller-manager-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:18.985573   56422 pod_ready.go:81] duration metric: took 399.642126ms for pod "kube-controller-manager-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:18.985579   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:19.182566   56422 request.go:629] Waited for 196.942537ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:27:19.182657   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-671000-m02
	I0505 14:27:19.182670   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:19.182680   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:19.182685   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:19.185472   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:19.382951   56422 request.go:629] Waited for 196.758576ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:19.383007   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:19.383017   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:19.383029   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:19.383038   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:19.385821   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:19.386385   56422 pod_ready.go:92] pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:19.386398   56422 pod_ready.go:81] duration metric: took 400.816137ms for pod "kube-controller-manager-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:19.386407   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:19.582220   56422 request.go:629] Waited for 195.768558ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:27:19.582272   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5jwqs
	I0505 14:27:19.582330   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:19.582343   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:19.582351   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:19.585386   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:19.782282   56422 request.go:629] Waited for 196.30992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:19.782373   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:19.782383   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:19.782394   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:19.782400   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:19.785634   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:19.786013   56422 pod_ready.go:92] pod "kube-proxy-5jwqs" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:19.786026   56422 pod_ready.go:81] duration metric: took 399.616336ms for pod "kube-proxy-5jwqs" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:19.786035   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:19.981623   56422 request.go:629] Waited for 195.485464ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:27:19.981697   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b45s6
	I0505 14:27:19.981706   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:19.981717   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:19.981725   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:19.985417   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:20.181984   56422 request.go:629] Waited for 196.052117ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:27:20.187814   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m04
	I0505 14:27:20.187826   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:20.187838   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:20.187847   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:20.190792   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:20.191251   56422 pod_ready.go:92] pod "kube-proxy-b45s6" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:20.191260   56422 pod_ready.go:81] duration metric: took 405.223694ms for pod "kube-proxy-b45s6" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:20.191267   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:20.381925   56422 request.go:629] Waited for 190.625911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:27:20.381967   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kppdj
	I0505 14:27:20.381972   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:20.382004   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:20.382010   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:20.384354   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:20.582843   56422 request.go:629] Waited for 198.065445ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:20.582958   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:20.582969   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:20.582980   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:20.582992   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:20.586355   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:20.586774   56422 pod_ready.go:92] pod "kube-proxy-kppdj" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:20.586787   56422 pod_ready.go:81] duration metric: took 395.518109ms for pod "kube-proxy-kppdj" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:20.586800   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:20.781382   56422 request.go:629] Waited for 194.541147ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:27:20.781463   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000
	I0505 14:27:20.781472   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:20.781480   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:20.781487   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:20.783828   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:20.982686   56422 request.go:629] Waited for 198.438208ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:20.982765   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000
	I0505 14:27:20.982788   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:20.982802   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:20.982810   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:20.985841   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:20.986392   56422 pod_ready.go:92] pod "kube-scheduler-ha-671000" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:20.986404   56422 pod_ready.go:81] duration metric: took 399.600727ms for pod "kube-scheduler-ha-671000" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:20.986426   56422 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:21.181336   56422 request.go:629] Waited for 194.86781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:27:21.181390   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-671000-m02
	I0505 14:27:21.181400   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:21.181410   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:21.181418   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:21.184206   56422 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0505 14:27:21.381781   56422 request.go:629] Waited for 196.988879ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:21.381882   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes/ha-671000-m02
	I0505 14:27:21.381894   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:21.381906   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:21.381916   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:21.384984   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:21.385382   56422 pod_ready.go:92] pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace has status "Ready":"True"
	I0505 14:27:21.385391   56422 pod_ready.go:81] duration metric: took 398.963901ms for pod "kube-scheduler-ha-671000-m02" in "kube-system" namespace to be "Ready" ...
	I0505 14:27:21.385398   56422 pod_ready.go:38] duration metric: took 28.715865393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0505 14:27:21.385411   56422 system_svc.go:44] waiting for kubelet service to be running ....
	I0505 14:27:21.385463   56422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:27:21.397153   56422 system_svc.go:56] duration metric: took 11.738357ms WaitForService to wait for kubelet
	I0505 14:27:21.397168   56422 kubeadm.go:576] duration metric: took 29.438456777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0505 14:27:21.397186   56422 node_conditions.go:102] verifying NodePressure condition ...
	I0505 14:27:21.582289   56422 request.go:629] Waited for 185.06224ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.51:8443/api/v1/nodes
	I0505 14:27:21.582369   56422 round_trippers.go:463] GET https://192.169.0.51:8443/api/v1/nodes
	I0505 14:27:21.582380   56422 round_trippers.go:469] Request Headers:
	I0505 14:27:21.582405   56422 round_trippers.go:473]     Accept: application/json, */*
	I0505 14:27:21.582414   56422 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0505 14:27:21.585988   56422 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0505 14:27:21.586879   56422 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:27:21.586889   56422 node_conditions.go:123] node cpu capacity is 2
	I0505 14:27:21.586896   56422 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:27:21.586899   56422 node_conditions.go:123] node cpu capacity is 2
	I0505 14:27:21.586902   56422 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 14:27:21.586905   56422 node_conditions.go:123] node cpu capacity is 2
	I0505 14:27:21.586908   56422 node_conditions.go:105] duration metric: took 189.720407ms to run NodePressure ...
	I0505 14:27:21.586917   56422 start.go:240] waiting for startup goroutines ...
	I0505 14:27:21.586931   56422 start.go:254] writing updated cluster config ...
	I0505 14:27:21.587284   56422 ssh_runner.go:195] Run: rm -f paused
	I0505 14:27:21.626923   56422 start.go:600] kubectl: 1.29.2, cluster: 1.30.0 (minor skew: 1)
	I0505 14:27:21.649132   56422 out.go:177] * Done! kubectl is now configured to use "ha-671000" cluster and "default" namespace by default
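For context on the pod_ready.go lines above: the test is polling the API server until every system pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reports the Ready condition, issuing a pod GET followed by a node GET for each check. The client-go sketch below is illustrative only, not minikube's implementation; the pod name is copied from the log and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is the condition the pod_ready log lines wait on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes the default kubeconfig (~/.kube/config) points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Example pod name taken from the log above; poll until Ready or timeout.
	const name = "coredns-7db6d8ff4d-hqtd2"
	for i := 0; i < 90; i++ {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("timed out waiting for pod %q\n", name)
}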
	
	
	==> Docker <==
	May 05 21:26:51 ha-671000 dockerd[1112]: time="2024-05-05T21:26:51.497800034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:26:51 ha-671000 dockerd[1112]: time="2024-05-05T21:26:51.721452946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:26:51 ha-671000 dockerd[1112]: time="2024-05-05T21:26:51.721770126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:26:51 ha-671000 dockerd[1112]: time="2024-05-05T21:26:51.721903771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:26:51 ha-671000 dockerd[1112]: time="2024-05-05T21:26:51.722009651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:26:52 ha-671000 dockerd[1112]: time="2024-05-05T21:26:52.486705370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:26:52 ha-671000 dockerd[1112]: time="2024-05-05T21:26:52.486917794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:26:52 ha-671000 dockerd[1112]: time="2024-05-05T21:26:52.486953415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:26:52 ha-671000 dockerd[1112]: time="2024-05-05T21:26:52.487139970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:27:14 ha-671000 dockerd[1112]: time="2024-05-05T21:27:14.483983990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:27:14 ha-671000 dockerd[1112]: time="2024-05-05T21:27:14.484058412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:27:14 ha-671000 dockerd[1112]: time="2024-05-05T21:27:14.484070913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:27:14 ha-671000 dockerd[1112]: time="2024-05-05T21:27:14.484540238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:27:17 ha-671000 dockerd[1112]: time="2024-05-05T21:27:17.480984139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:27:17 ha-671000 dockerd[1112]: time="2024-05-05T21:27:17.481033036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:27:17 ha-671000 dockerd[1112]: time="2024-05-05T21:27:17.481041671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:27:17 ha-671000 dockerd[1112]: time="2024-05-05T21:27:17.481332003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:27:20 ha-671000 dockerd[1106]: time="2024-05-05T21:27:20.560108740Z" level=info msg="ignoring event" container=5ee642fe46f459e85759879b7196bd1e9b106055591bf909074e2390b5a44c80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 21:27:20 ha-671000 dockerd[1112]: time="2024-05-05T21:27:20.560274148Z" level=info msg="shim disconnected" id=5ee642fe46f459e85759879b7196bd1e9b106055591bf909074e2390b5a44c80 namespace=moby
	May 05 21:27:20 ha-671000 dockerd[1112]: time="2024-05-05T21:27:20.560521817Z" level=warning msg="cleaning up after shim disconnected" id=5ee642fe46f459e85759879b7196bd1e9b106055591bf909074e2390b5a44c80 namespace=moby
	May 05 21:27:20 ha-671000 dockerd[1112]: time="2024-05-05T21:27:20.560531270Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 21:28:41 ha-671000 dockerd[1112]: time="2024-05-05T21:28:41.478790889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 21:28:41 ha-671000 dockerd[1112]: time="2024-05-05T21:28:41.478849300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 21:28:41 ha-671000 dockerd[1112]: time="2024-05-05T21:28:41.478863963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 21:28:41 ha-671000 dockerd[1112]: time="2024-05-05T21:28:41.479295034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aeba8364afd2a       6e38f40d628db       5 minutes ago       Running             storage-provisioner       4                   3b57ea16791f2       storage-provisioner
	3fc86ac79228e       cbb01a7bd410d       7 minutes ago       Running             coredns                   2                   349926700ccb5       coredns-7db6d8ff4d-hqtd2
	ea819c5c29934       cbb01a7bd410d       7 minutes ago       Running             coredns                   2                   4a373fbeddb9c       coredns-7db6d8ff4d-kjf54
	c651643d03cb3       a0bf559e280cf       7 minutes ago       Running             kube-proxy                2                   e2353fa9d13e4       kube-proxy-kppdj
	b39b360c62914       8c811b4aec35f       7 minutes ago       Running             busybox                   2                   befe63131d035       busybox-fc5497c4f-lfn9v
	1ba4c448188e7       4950bb10b3f87       7 minutes ago       Running             kindnet-cni               2                   0fdf5f5fd9f2c       kindnet-zvz9x
	5ee642fe46f45       6e38f40d628db       7 minutes ago       Exited              storage-provisioner       3                   3b57ea16791f2       storage-provisioner
	a72db2f28c2ac       c7aad43836fa5       7 minutes ago       Running             kube-controller-manager   4                   94744331c210e       kube-controller-manager-ha-671000
	390838c9a5a33       3861cfcd7c04c       8 minutes ago       Running             etcd                      2                   9771484c9c819       etcd-ha-671000
	14f02fa9268d6       c42f13656d0b2       8 minutes ago       Running             kube-apiserver            2                   5e1e793752549       kube-apiserver-ha-671000
	3d95bc0028aed       22aaebb38f4a9       8 minutes ago       Running             kube-vip                  1                   c1cb953027d69       kube-vip-ha-671000
	07e9b8695bca5       259c8277fcbbc       8 minutes ago       Running             kube-scheduler            2                   f6260131926be       kube-scheduler-ha-671000
	1df6ea0ade29b       c7aad43836fa5       8 minutes ago       Exited              kube-controller-manager   3                   94744331c210e       kube-controller-manager-ha-671000
	4e72d733bb177       cbb01a7bd410d       11 minutes ago      Exited              coredns                   1                   17013aecf8e89       coredns-7db6d8ff4d-hqtd2
	a5ba9a7a24b6f       cbb01a7bd410d       11 minutes ago      Exited              coredns                   1                   5a876c8ef945c       coredns-7db6d8ff4d-kjf54
	c048dc81e6392       4950bb10b3f87       12 minutes ago      Exited              kindnet-cni               1                   382155dbcfe93       kindnet-zvz9x
	76503e51b3afa       8c811b4aec35f       12 minutes ago      Exited              busybox                   1                   8637a9efa2c11       busybox-fc5497c4f-lfn9v
	7001a9c78d0af       a0bf559e280cf       12 minutes ago      Exited              kube-proxy                1                   f930d07fb2b00       kube-proxy-kppdj
	0faa6b8c33ebd       c42f13656d0b2       13 minutes ago      Exited              kube-apiserver            1                   70fab261c2b17       kube-apiserver-ha-671000
	0c29a1524fb04       22aaebb38f4a9       13 minutes ago      Exited              kube-vip                  0                   2c44ab6fb1b45       kube-vip-ha-671000
	06468c7f97645       3861cfcd7c04c       13 minutes ago      Exited              etcd                      1                   7eb485f57bef9       etcd-ha-671000
	09b069cddaf09       259c8277fcbbc       13 minutes ago      Exited              kube-scheduler            1                   0b3f9b67d960c       kube-scheduler-ha-671000
	
	
	==> coredns [3fc86ac79228] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34141 - 17091 "HINFO IN 3081903063953944201.982792549402443733. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011861182s
	
	
	==> coredns [4e72d733bb17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60404 - 16395 "HINFO IN 7673949606304789129.6924752665992071371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01220844s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a5ba9a7a24b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54698 - 36003 "HINFO IN 1073736587953336830.7574535335510144074. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015279179s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ea819c5c2993] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58789 - 49444 "HINFO IN 3396099655446756554.877284966826240622. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011978796s
	
	
	==> describe nodes <==
	Name:               ha-671000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_05T14_15_29_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:15:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:34:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:31:37 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:31:37 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:31:37 +0000   Sun, 05 May 2024 21:15:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:31:37 +0000   Sun, 05 May 2024 21:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.51
	  Hostname:    ha-671000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d230a7cafaf41df8a4b7d001337d6d8
	  System UUID:                93894e2d-0000-0000-8cc9-aa1a138ddf96
	  Boot ID:                    b17cc39f-91ec-4e35-b03c-82a3b2c1973e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lfn9v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7db6d8ff4d-hqtd2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7db6d8ff4d-kjf54             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-zvz9x                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-kppdj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-671000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m34s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                    kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m                    kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                    kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           18m                    node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  NodeReady                18m                    kubelet          Node ha-671000 status is now: NodeReady
	  Normal  RegisteredNode           17m                    node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  Starting                 13m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                    node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  NodeHasSufficientMemory  8m34s (x8 over 8m34s)  kubelet          Node ha-671000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    8m34s (x8 over 8m34s)  kubelet          Node ha-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x7 over 8m34s)  kubelet          Node ha-671000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m53s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	  Normal  RegisteredNode           7m44s                  node-controller  Node ha-671000 event: Registered Node ha-671000 in Controller
	
	
	Name:               ha-671000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_16_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:16:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:34:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:31:30 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:31:30 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:31:30 +0000   Sun, 05 May 2024 21:16:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:31:30 +0000   Sun, 05 May 2024 21:16:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.52
	  Hostname:    ha-671000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1c0df383c97842c08430f77b7bc48528
	  System UUID:                294b4d68-0000-0000-b3f3-54381951a5e8
	  Boot ID:                    80e85e7a-ced9-4f31-aa38-9d1228dcda92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q27t4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-ha-671000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-kn94d                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-671000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-671000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-5jwqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-671000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-671000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m58s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 14m                    kubelet          Node ha-671000-m02 has been rebooted, boot id: 4c58d033-04b8-4c15-8fdc-920ae431b3e3
	  Normal   RegisteredNode           14m                    node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   Starting                 8m15s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m15s (x8 over 8m15s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m15s (x8 over 8m15s)  kubelet          Node ha-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m15s (x7 over 8m15s)  kubelet          Node ha-671000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           7m53s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-671000-m02 event: Registered Node ha-671000-m02 in Controller
	
	
	Name:               ha-671000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-671000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3
	                    minikube.k8s.io/name=ha-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_05T14_18_38_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 05 May 2024 21:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-671000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 05 May 2024 21:34:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 05 May 2024 21:32:27 +0000   Sun, 05 May 2024 21:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 05 May 2024 21:32:27 +0000   Sun, 05 May 2024 21:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 05 May 2024 21:32:27 +0000   Sun, 05 May 2024 21:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 05 May 2024 21:32:27 +0000   Sun, 05 May 2024 21:26:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.54
	  Hostname:    ha-671000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc491689c6564559b23ce91c9bb9b866
	  System UUID:                8d0f44c8-0000-0000-aaa8-77d77d483dce
	  Boot ID:                    3d683a82-75dd-48c4-80b7-df840db23f51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zc2ns    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kindnet-ffg2p              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-b45s6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m33s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   NodeHasSufficientMemory  15m (x2 over 15m)      kubelet          Node ha-671000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x2 over 15m)      kubelet          Node ha-671000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x2 over 15m)      kubelet          Node ha-671000-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                    node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-671000-m04 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   NodeNotReady             11m                    node-controller  Node ha-671000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           7m53s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   RegisteredNode           7m44s                  node-controller  Node ha-671000-m04 event: Registered Node ha-671000-m04 in Controller
	  Normal   Starting                 7m35s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m35s (x2 over 7m35s)  kubelet          Node ha-671000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m35s (x2 over 7m35s)  kubelet          Node ha-671000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m35s (x2 over 7m35s)  kubelet          Node ha-671000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 7m35s                  kubelet          Node ha-671000-m04 has been rebooted, boot id: 3d683a82-75dd-48c4-80b7-df840db23f51
	  Normal   NodeReady                7m35s                  kubelet          Node ha-671000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007942] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.387169] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000057] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.006639] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.647493] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.238475] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.968528] systemd-fstab-generator[471]: Ignoring "noauto" option for root device
	[  +0.103422] systemd-fstab-generator[487]: Ignoring "noauto" option for root device
	[  +1.915992] systemd-fstab-generator[973]: Ignoring "noauto" option for root device
	[  +0.059413] kauditd_printk_skb: 81 callbacks suppressed
	[  +0.217321] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +0.098974] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +0.121763] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +2.424224] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.098050] systemd-fstab-generator[1282]: Ignoring "noauto" option for root device
	[  +0.104811] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.132144] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.409354] systemd-fstab-generator[1456]: Ignoring "noauto" option for root device
	[  +6.460683] kauditd_printk_skb: 237 callbacks suppressed
	[May 5 21:26] kauditd_printk_skb: 40 callbacks suppressed
	[ +28.475318] kauditd_printk_skb: 25 callbacks suppressed
	[May 5 21:27] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.021526] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [06468c7f9764] <==
	{"level":"warn","ts":"2024-05-05T21:25:27.43193Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:25:22.679417Z","time spent":"4.75251126s","remote":"127.0.0.1:54290","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/05/05 21:25:27 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-05-05T21:25:27.430719Z","caller":"traceutil/trace.go:171","msg":"trace[781495231] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; }","duration":"3.246602059s","start":"2024-05-05T21:25:24.184114Z","end":"2024-05-05T21:25:27.430716Z","steps":["trace[781495231] 'agreement among raft nodes before linearized reading'  (duration: 3.246590392s)"],"step_count":1}
	{"level":"warn","ts":"2024-05-05T21:25:27.432125Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-05T21:25:24.184106Z","time spent":"3.247993414s","remote":"127.0.0.1:54456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true "}
	2024/05/05 21:25:27 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-05-05T21:25:27.455263Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-05T21:25:27.455288Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.51:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-05T21:25:27.455335Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1792221d12ca7193","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-05T21:25:27.45544Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.455453Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.455467Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.45556Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.455748Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.4558Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.455813Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"33f5589d0a9a0d8f"}
	{"level":"info","ts":"2024-05-05T21:25:27.455818Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.455823Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.455834Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.456208Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.456254Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.456303Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.456313Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c5e392ded2f33250"}
	{"level":"info","ts":"2024-05-05T21:25:27.460254Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.51:2380"}
	{"level":"info","ts":"2024-05-05T21:25:27.460371Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.51:2380"}
	{"level":"info","ts":"2024-05-05T21:25:27.460379Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-671000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.51:2380"],"advertise-client-urls":["https://192.169.0.51:2379"]}
	
	
	==> etcd [390838c9a5a3] <==
	{"level":"warn","ts":"2024-05-05T21:34:27.428153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.43897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.445235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.452364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.458543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.462102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.46817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.472153Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.472446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.474978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.477248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.486997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.540138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.542758Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.546675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.550904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.554286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.559895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.563927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.570801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.671099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.747246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.771207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.870226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-05T21:34:27.877512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1792221d12ca7193","from":"1792221d12ca7193","remote-peer-id":"c5e392ded2f33250","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:34:27 up 8 min,  0 users,  load average: 0.14, 0.18, 0.10
	Linux ha-671000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1ba4c448188e] <==
	I0505 21:33:42.546051       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:33:52.550861       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:33:52.550925       1 main.go:227] handling current node
	I0505 21:33:52.550944       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:33:52.550981       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:33:52.551112       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:33:52.551184       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:34:02.556497       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:34:02.556726       1 main.go:227] handling current node
	I0505 21:34:02.556849       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:34:02.556930       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:34:02.557093       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:34:02.557190       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:34:12.561032       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:34:12.561110       1 main.go:227] handling current node
	I0505 21:34:12.561158       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:34:12.561238       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:34:12.561386       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:34:12.561432       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:34:22.572835       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:34:22.572868       1 main.go:227] handling current node
	I0505 21:34:22.572876       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:34:22.572880       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:34:22.573135       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:34:22.573166       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c048dc81e639] <==
	I0505 21:24:30.670666       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:24:40.675334       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:24:40.675403       1 main.go:227] handling current node
	I0505 21:24:40.675421       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:24:40.675492       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:24:40.675730       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:24:40.675803       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:24:50.686229       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:24:50.686265       1 main.go:227] handling current node
	I0505 21:24:50.686273       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:24:50.686278       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:24:50.686478       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:24:50.686655       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:25:00.693268       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:25:00.693351       1 main.go:227] handling current node
	I0505 21:25:00.693501       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:25:00.693551       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:25:00.693728       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:25:00.693813       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	I0505 21:25:10.699369       1 main.go:223] Handling node with IPs: map[192.169.0.51:{}]
	I0505 21:25:10.699406       1 main.go:227] handling current node
	I0505 21:25:10.699414       1 main.go:223] Handling node with IPs: map[192.169.0.52:{}]
	I0505 21:25:10.699418       1 main.go:250] Node ha-671000-m02 has CIDR [10.244.1.0/24] 
	I0505 21:25:10.699596       1 main.go:223] Handling node with IPs: map[192.169.0.54:{}]
	I0505 21:25:10.699626       1 main.go:250] Node ha-671000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0faa6b8c33eb] <==
	W0505 21:25:27.452207       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452229       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452255       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452281       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452305       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452327       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452351       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452376       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452402       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452425       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452484       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452510       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452546       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452587       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.452620       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453204       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453237       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453261       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453298       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453319       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453340       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453360       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0505 21:25:27.453380       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0505 21:25:27.454941       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0505 21:25:27.455030       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [14f02fa9268d] <==
	I0505 21:26:21.939152       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0505 21:26:21.939175       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:26:21.939382       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:26:22.031996       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0505 21:26:22.044482       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0505 21:26:22.078756       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0505 21:26:22.081015       1 aggregator.go:165] initial CRD sync complete...
	I0505 21:26:22.081052       1 autoregister_controller.go:141] Starting autoregister controller
	I0505 21:26:22.081058       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0505 21:26:22.081062       1 cache.go:39] Caches are synced for autoregister controller
	I0505 21:26:22.086323       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0505 21:26:22.086436       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0505 21:26:22.086517       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0505 21:26:22.089566       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0505 21:26:22.090613       1 shared_informer.go:320] Caches are synced for configmaps
	I0505 21:26:22.091228       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0505 21:26:22.092827       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0505 21:26:22.092910       1 policy_source.go:224] refreshing policies
	W0505 21:26:22.117650       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.52]
	I0505 21:26:22.119501       1 controller.go:615] quota admission added evaluator for: endpoints
	I0505 21:26:22.132917       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0505 21:26:22.137683       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0505 21:26:22.159674       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0505 21:26:22.934578       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0505 21:26:23.250189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.51]
	
	
	==> kube-controller-manager [1df6ea0ade29] <==
	I0505 21:26:00.873493       1 serving.go:380] Generated self-signed cert in-memory
	I0505 21:26:01.251942       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0505 21:26:01.251986       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:26:01.255676       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0505 21:26:01.255825       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0505 21:26:01.256306       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0505 21:26:01.256459       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0505 21:26:22.016281       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [a72db2f28c2a] <==
	I0505 21:26:44.147388       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:26:44.148738       1 shared_informer.go:320] Caches are synced for resource quota
	I0505 21:26:44.571876       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:26:44.633220       1 shared_informer.go:320] Caches are synced for garbage collector
	I0505 21:26:44.633268       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0505 21:26:52.423176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-671000-m04"
	I0505 21:26:52.470257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.914µs"
	I0505 21:26:52.479667       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.632µs"
	I0505 21:26:52.898090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.556752ms"
	I0505 21:26:52.898195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.149µs"
	I0505 21:26:53.301947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.142µs"
	I0505 21:26:55.430834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.302048ms"
	I0505 21:26:55.431072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.589µs"
	I0505 21:27:03.447603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.332µs"
	I0505 21:27:06.443927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.878µs"
	I0505 21:27:15.062652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="80.548µs"
	I0505 21:27:15.082049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.845168ms"
	I0505 21:27:15.082122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.306µs"
	I0505 21:27:15.093835       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qfwk6\": the object has been modified; please apply your changes to the latest version and try again"
	I0505 21:27:15.094151       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bea99034-e1b7-4a88-8a06-fbc74abeaaf9", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qfwk6": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:27:18.094225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.043µs"
	I0505 21:27:18.113246       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qfwk6\": the object has been modified; please apply your changes to the latest version and try again"
	I0505 21:27:18.113912       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bea99034-e1b7-4a88-8a06-fbc74abeaaf9", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qfwk6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qfwk6": the object has been modified; please apply your changes to the latest version and try again
	I0505 21:27:18.114557       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="8.400597ms"
	I0505 21:27:18.115413       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.247µs"
	
	
	==> kube-proxy [7001a9c78d0a] <==
	I0505 21:22:05.427749       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:22:05.441644       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
	I0505 21:22:05.545461       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:22:05.545682       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:22:05.545778       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:22:05.548756       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:22:05.549189       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:22:05.549278       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:22:05.551545       1 config.go:192] "Starting service config controller"
	I0505 21:22:05.551674       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:22:05.551761       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:22:05.551848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:22:05.552969       1 config.go:319] "Starting node config controller"
	I0505 21:22:05.553109       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:22:05.652764       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:22:05.652801       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:22:05.653231       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c651643d03cb] <==
	I0505 21:26:52.665456       1 server_linux.go:69] "Using iptables proxy"
	I0505 21:26:52.679499       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.51"]
	I0505 21:26:52.714338       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0505 21:26:52.714394       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0505 21:26:52.714408       1 server_linux.go:165] "Using iptables Proxier"
	I0505 21:26:52.716946       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0505 21:26:52.717319       1 server.go:872] "Version info" version="v1.30.0"
	I0505 21:26:52.717348       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:26:52.719316       1 config.go:192] "Starting service config controller"
	I0505 21:26:52.719524       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0505 21:26:52.719745       1 config.go:101] "Starting endpoint slice config controller"
	I0505 21:26:52.719771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0505 21:26:52.721534       1 config.go:319] "Starting node config controller"
	I0505 21:26:52.721613       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0505 21:26:52.823212       1 shared_informer.go:320] Caches are synced for service config
	I0505 21:26:52.823826       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0505 21:26:52.823860       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [07e9b8695bca] <==
	I0505 21:26:01.127092       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:26:11.641512       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.51:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0505 21:26:11.641556       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:26:11.641562       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:26:21.646434       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:26:21.646596       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:26:21.649129       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:26:21.649368       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:26:21.649459       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:26:21.649558       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:26:22.150047       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0505 21:26:52.467880       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zc2ns\": pod busybox-fc5497c4f-zc2ns is already assigned to node \"ha-671000-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-zc2ns" node="ha-671000-m04"
	E0505 21:26:52.467923       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 52a6dd23-85c1-4e9a-a716-0e1665b36649(default/busybox-fc5497c4f-zc2ns) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-zc2ns"
	E0505 21:26:52.467940       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-zc2ns\": pod busybox-fc5497c4f-zc2ns is already assigned to node \"ha-671000-m04\"" pod="default/busybox-fc5497c4f-zc2ns"
	I0505 21:26:52.467952       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-zc2ns" node="ha-671000-m04"
	
	
	==> kube-scheduler [09b069cddaf0] <==
	I0505 21:21:17.140666       1 serving.go:380] Generated self-signed cert in-memory
	W0505 21:21:27.959721       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.51:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0505 21:21:27.959770       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0505 21:21:27.959776       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0505 21:21:37.325220       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0505 21:21:37.325291       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0505 21:21:37.336314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0505 21:21:37.337352       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0505 21:21:37.337505       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0505 21:21:37.341283       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0505 21:21:37.438307       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0505 21:25:27.301449       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 05 21:29:53 ha-671000 kubelet[1463]: E0505 21:29:53.447359    1463 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:29:53 ha-671000 kubelet[1463]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:29:53 ha-671000 kubelet[1463]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:29:53 ha-671000 kubelet[1463]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:29:53 ha-671000 kubelet[1463]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:30:53 ha-671000 kubelet[1463]: E0505 21:30:53.445213    1463 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:30:53 ha-671000 kubelet[1463]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:30:53 ha-671000 kubelet[1463]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:30:53 ha-671000 kubelet[1463]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:30:53 ha-671000 kubelet[1463]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:31:53 ha-671000 kubelet[1463]: E0505 21:31:53.446741    1463 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:31:53 ha-671000 kubelet[1463]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:31:53 ha-671000 kubelet[1463]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:31:53 ha-671000 kubelet[1463]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:31:53 ha-671000 kubelet[1463]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:32:53 ha-671000 kubelet[1463]: E0505 21:32:53.445980    1463 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:32:53 ha-671000 kubelet[1463]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:32:53 ha-671000 kubelet[1463]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:32:53 ha-671000 kubelet[1463]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:32:53 ha-671000 kubelet[1463]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 05 21:33:53 ha-671000 kubelet[1463]: E0505 21:33:53.445621    1463 iptables.go:577] "Could not set up iptables canary" err=<
	May 05 21:33:53 ha-671000 kubelet[1463]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 05 21:33:53 ha-671000 kubelet[1463]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 05 21:33:53 ha-671000 kubelet[1463]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 05 21:33:53 ha-671000 kubelet[1463]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-671000 -n ha-671000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-671000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (427.09s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (194.67s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-645000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-645000 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (1m13.74700697s)

                                                
                                                
-- stdout --
	* [pause-645000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "pause-645000" primary control-plane node in "pause-645000" cluster
	* Updating the running hyperkit "pause-645000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 15:05:35.312703   58814 out.go:291] Setting OutFile to fd 1 ...
	I0505 15:05:35.312972   58814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 15:05:35.312977   58814 out.go:304] Setting ErrFile to fd 2...
	I0505 15:05:35.312981   58814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 15:05:35.313170   58814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 15:05:35.314708   58814 out.go:298] Setting JSON to false
	I0505 15:05:35.338496   58814 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":21906,"bootTime":1714924829,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 15:05:35.338600   58814 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 15:05:35.359806   58814 out.go:177] * [pause-645000] minikube v1.33.0 on Darwin 14.4.1
	I0505 15:05:35.425359   58814 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 15:05:35.405698   58814 notify.go:220] Checking for updates...
	I0505 15:05:35.468446   58814 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 15:05:35.510571   58814 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 15:05:35.552617   58814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 15:05:35.594400   58814 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 15:05:35.615520   58814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 15:05:35.637069   58814 config.go:182] Loaded profile config "pause-645000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 15:05:35.637455   58814 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:05:35.637508   58814 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:05:35.646599   58814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60620
	I0505 15:05:35.646989   58814 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:05:35.647409   58814 main.go:141] libmachine: Using API Version  1
	I0505 15:05:35.647420   58814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:05:35.647648   58814 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:05:35.647787   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:35.647985   58814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 15:05:35.648244   58814 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:05:35.648287   58814 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:05:35.656973   58814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60622
	I0505 15:05:35.657334   58814 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:05:35.657700   58814 main.go:141] libmachine: Using API Version  1
	I0505 15:05:35.657709   58814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:05:35.657949   58814 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:05:35.658059   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:35.686393   58814 out.go:177] * Using the hyperkit driver based on existing profile
	I0505 15:05:35.726602   58814 start.go:297] selected driver: hyperkit
	I0505 15:05:35.726619   58814 start.go:901] validating driver "hyperkit" against &{Name:pause-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.73 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 15:05:35.726746   58814 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 15:05:35.726863   58814 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 15:05:35.726978   58814 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 15:05:35.735958   58814 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 15:05:35.739959   58814 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:05:35.739984   58814 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 15:05:35.742899   58814 cni.go:84] Creating CNI manager for ""
	I0505 15:05:35.742927   58814 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 15:05:35.743016   58814 start.go:340] cluster config:
	{Name:pause-645000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-645000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.73 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 15:05:35.743136   58814 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 15:05:35.785556   58814 out.go:177] * Starting "pause-645000" primary control-plane node in "pause-645000" cluster
	I0505 15:05:35.806740   58814 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 15:05:35.806816   58814 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 15:05:35.806838   58814 cache.go:56] Caching tarball of preloaded images
	I0505 15:05:35.806983   58814 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 15:05:35.806996   58814 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 15:05:35.807140   58814 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/pause-645000/config.json ...
	I0505 15:05:35.807774   58814 start.go:360] acquireMachinesLock for pause-645000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 15:05:35.807848   58814 start.go:364] duration metric: took 56.701µs to acquireMachinesLock for "pause-645000"
	I0505 15:05:35.807876   58814 start.go:96] Skipping create...Using existing machine configuration
	I0505 15:05:35.807888   58814 fix.go:54] fixHost starting: 
	I0505 15:05:35.808198   58814 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:05:35.808227   58814 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:05:35.817453   58814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60624
	I0505 15:05:35.817800   58814 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:05:35.818165   58814 main.go:141] libmachine: Using API Version  1
	I0505 15:05:35.818181   58814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:05:35.818426   58814 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:05:35.818541   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:35.818634   58814 main.go:141] libmachine: (pause-645000) Calling .GetState
	I0505 15:05:35.818715   58814 main.go:141] libmachine: (pause-645000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:35.818816   58814 main.go:141] libmachine: (pause-645000) DBG | hyperkit pid from json: 58574
	I0505 15:05:35.819783   58814 fix.go:112] recreateIfNeeded on pause-645000: state=Running err=<nil>
	W0505 15:05:35.819802   58814 fix.go:138] unexpected machine state, will restart: <nil>
	I0505 15:05:35.861564   58814 out.go:177] * Updating the running hyperkit "pause-645000" VM ...
	I0505 15:05:35.882529   58814 machine.go:94] provisionDockerMachine start ...
	I0505 15:05:35.882556   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:35.882743   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:35.882861   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:35.882980   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:35.883093   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:35.883177   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:35.883310   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:35.883536   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:35.883544   58814 main.go:141] libmachine: About to run SSH command:
	hostname
	I0505 15:05:35.949185   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-645000
	
	I0505 15:05:35.949201   58814 main.go:141] libmachine: (pause-645000) Calling .GetMachineName
	I0505 15:05:35.949335   58814 buildroot.go:166] provisioning hostname "pause-645000"
	I0505 15:05:35.949347   58814 main.go:141] libmachine: (pause-645000) Calling .GetMachineName
	I0505 15:05:35.949441   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:35.949519   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:35.949604   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:35.949687   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:35.949768   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:35.949896   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:35.950024   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:35.950039   58814 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-645000 && echo "pause-645000" | sudo tee /etc/hostname
	I0505 15:05:36.025808   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-645000
	
	I0505 15:05:36.025829   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.025985   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.026077   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.026200   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.026298   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.026418   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:36.026567   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:36.026584   58814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-645000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-645000/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-645000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 15:05:36.089809   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 15:05:36.089828   58814 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 15:05:36.089852   58814 buildroot.go:174] setting up certificates
	I0505 15:05:36.089859   58814 provision.go:84] configureAuth start
	I0505 15:05:36.089866   58814 main.go:141] libmachine: (pause-645000) Calling .GetMachineName
	I0505 15:05:36.089999   58814 main.go:141] libmachine: (pause-645000) Calling .GetIP
	I0505 15:05:36.090092   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.090194   58814 provision.go:143] copyHostCerts
	I0505 15:05:36.090276   58814 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 15:05:36.090287   58814 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 15:05:36.090475   58814 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 15:05:36.090744   58814 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 15:05:36.090751   58814 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 15:05:36.090831   58814 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 15:05:36.091044   58814 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 15:05:36.091054   58814 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 15:05:36.091130   58814 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 15:05:36.091292   58814 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.pause-645000 san=[127.0.0.1 192.169.0.73 localhost minikube pause-645000]
	I0505 15:05:36.161331   58814 provision.go:177] copyRemoteCerts
	I0505 15:05:36.161397   58814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 15:05:36.161415   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.161572   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.161676   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.161780   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.161877   58814 sshutil.go:53] new ssh client: &{IP:192.169.0.73 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/pause-645000/id_rsa Username:docker}
	I0505 15:05:36.201283   58814 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 15:05:36.223299   58814 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0505 15:05:36.245057   58814 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 15:05:36.266618   58814 provision.go:87] duration metric: took 176.745066ms to configureAuth
	I0505 15:05:36.266632   58814 buildroot.go:189] setting minikube options for container-runtime
	I0505 15:05:36.266795   58814 config.go:182] Loaded profile config "pause-645000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 15:05:36.266808   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:36.266943   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.267025   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.267119   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.267213   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.267299   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.267415   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:36.267545   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:36.267552   58814 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 15:05:36.333197   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 15:05:36.333214   58814 buildroot.go:70] root file system type: tmpfs
	I0505 15:05:36.333293   58814 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 15:05:36.333313   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.333445   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.333537   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.333650   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.333749   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.333878   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:36.334053   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:36.334098   58814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 15:05:36.410323   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 15:05:36.410347   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.410482   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.410589   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.410725   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.410834   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.410973   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:36.411113   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:36.411125   58814 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 15:05:36.480557   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 15:05:36.480571   58814 machine.go:97] duration metric: took 598.033793ms to provisionDockerMachine
	I0505 15:05:36.480577   58814 start.go:293] postStartSetup for "pause-645000" (driver="hyperkit")
	I0505 15:05:36.480585   58814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 15:05:36.480595   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:36.480771   58814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 15:05:36.480783   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.480884   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.480984   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.481096   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.481181   58814 sshutil.go:53] new ssh client: &{IP:192.169.0.73 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/pause-645000/id_rsa Username:docker}
	I0505 15:05:36.521303   58814 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 15:05:36.524988   58814 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 15:05:36.525003   58814 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 15:05:36.525092   58814 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 15:05:36.525248   58814 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 15:05:36.525445   58814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 15:05:36.532712   58814 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 15:05:36.555425   58814 start.go:296] duration metric: took 74.83945ms for postStartSetup
	I0505 15:05:36.555452   58814 fix.go:56] duration metric: took 747.573087ms for fixHost
	I0505 15:05:36.555467   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.555605   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.555714   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.555823   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.555939   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.556075   58814 main.go:141] libmachine: Using SSH client type: native
	I0505 15:05:36.556231   58814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4ad3b80] 0x4ad68e0 <nil>  [] 0s} 192.169.0.73 22 <nil> <nil>}
	I0505 15:05:36.556239   58814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0505 15:05:36.624904   58814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714946736.859481409
	
	I0505 15:05:36.624917   58814 fix.go:216] guest clock: 1714946736.859481409
	I0505 15:05:36.624923   58814 fix.go:229] Guest: 2024-05-05 15:05:36.859481409 -0700 PDT Remote: 2024-05-05 15:05:36.555455 -0700 PDT m=+1.287306448 (delta=304.026409ms)
	I0505 15:05:36.624942   58814 fix.go:200] guest clock delta is within tolerance: 304.026409ms
	I0505 15:05:36.624947   58814 start.go:83] releasing machines lock for "pause-645000", held for 817.096389ms
	I0505 15:05:36.624966   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:36.625117   58814 main.go:141] libmachine: (pause-645000) Calling .GetIP
	I0505 15:05:36.625251   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:36.625604   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:36.625738   58814 main.go:141] libmachine: (pause-645000) Calling .DriverName
	I0505 15:05:36.625820   58814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 15:05:36.625850   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.625931   58814 ssh_runner.go:195] Run: cat /version.json
	I0505 15:05:36.625956   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHHostname
	I0505 15:05:36.625963   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.626099   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.626135   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHPort
	I0505 15:05:36.626317   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHKeyPath
	I0505 15:05:36.626329   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.626427   58814 main.go:141] libmachine: (pause-645000) Calling .GetSSHUsername
	I0505 15:05:36.626472   58814 sshutil.go:53] new ssh client: &{IP:192.169.0.73 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/pause-645000/id_rsa Username:docker}
	I0505 15:05:36.626545   58814 sshutil.go:53] new ssh client: &{IP:192.169.0.73 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/pause-645000/id_rsa Username:docker}
	I0505 15:05:36.665180   58814 ssh_runner.go:195] Run: systemctl --version
	I0505 15:05:36.716360   58814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 15:05:36.720671   58814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 15:05:36.720726   58814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 15:05:36.728646   58814 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0505 15:05:36.728658   58814 start.go:494] detecting cgroup driver to use...
	I0505 15:05:36.728776   58814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 15:05:36.745823   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 15:05:36.754399   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 15:05:36.763159   58814 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 15:05:36.763219   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 15:05:36.772297   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 15:05:36.781406   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 15:05:36.790849   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 15:05:36.801644   58814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 15:05:36.812689   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 15:05:36.823529   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 15:05:36.834019   58814 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 15:05:36.845554   58814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 15:05:36.855692   58814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 15:05:36.865509   58814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:05:37.015617   58814 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 15:05:37.035816   58814 start.go:494] detecting cgroup driver to use...
	I0505 15:05:37.035899   58814 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 15:05:37.056595   58814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 15:05:37.073813   58814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 15:05:37.094462   58814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 15:05:37.106420   58814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 15:05:37.117700   58814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 15:05:37.135782   58814 ssh_runner.go:195] Run: which cri-dockerd
	I0505 15:05:37.139467   58814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 15:05:37.147567   58814 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 15:05:37.165549   58814 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 15:05:37.301777   58814 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 15:05:37.447102   58814 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 15:05:37.447195   58814 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 15:05:37.464867   58814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:05:37.608599   58814 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 15:06:48.816017   58814 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.207942029s)
	I0505 15:06:48.816090   58814 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0505 15:06:48.866560   58814 out.go:177] 
	W0505 15:06:48.888074   58814 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 05 22:02:20 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.363626739Z" level=info msg="Starting up"
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.364091828Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.364761996Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=538
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.384020529Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397540571Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397605043Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397667825Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397703216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397779520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397875075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398023069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398068231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398099787Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398128474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398212970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398386313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.399920190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.399971136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400109694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400152898Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400274644Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400343933Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400377825Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444629703Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444719350Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444882553Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444935653Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445019189Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445162018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445676001Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445836736Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445882147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445967502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446005281Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446087839Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446133454Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446170385Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446251921Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446289634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446730917Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446844525Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446893778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446926968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447008018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447050490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447085883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447162789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447203774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447234734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447311423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447355601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447387242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447463183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447508551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447542247Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447624977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447666691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447696787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447818875Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447861491Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447960601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448001863Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448119447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448160738Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448250517Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448516552Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448605593Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448726553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448835781Z" level=info msg="containerd successfully booted in 0.066498s"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.398416761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.406140571Z" level=info msg="Loading containers: start."
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.524878173Z" level=info msg="Loading containers: done."
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.535589443Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.535715946Z" level=info msg="Daemon has completed initialization"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.557155610Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.557240778Z" level=info msg="API listen on [::]:2376"
	May 05 22:02:21 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.527362017Z" level=info msg="Processing signal 'terminated'"
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528496546Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528775088Z" level=info msg="Daemon shutdown complete"
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528805123Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528818433Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:02:22 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:02:23 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:02:23 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:02:23 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.578745179Z" level=info msg="Starting up"
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.579376599Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.579925846Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=798
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.599118343Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613605729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613667649Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613714779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613778603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613834916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613867679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613988096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614026339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614056151Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614084973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614121618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614221653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616326370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616384222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616520418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616566163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616606113Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616643075Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616673617Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616845013Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616899253Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616934642Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616969174Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617006078Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617063162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617263909Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617337847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617374147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617409485Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617442442Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617481236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617516776Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617548406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617585906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617619260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617659592Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617705591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617778181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617824706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617860232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617897588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617929593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617960413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617990602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618021790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618053425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618092754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618127877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618158885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618189785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618229274Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618273536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618306963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618337496Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618412842Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618457088Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618489867Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618518723Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618584489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618625800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618656291Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618906091Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618998507Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.619059845Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.619098953Z" level=info msg="containerd successfully booted in 0.020696s"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.620397385Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.630600379Z" level=info msg="Loading containers: start."
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.731990181Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.771218862Z" level=info msg="Loading containers: done."
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.778787006Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.778874190Z" level=info msg="Daemon has completed initialization"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.795601035Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:02:24 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.795843198Z" level=info msg="API listen on [::]:2376"
	May 05 22:04:26 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.770630419Z" level=info msg="Processing signal 'terminated'"
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.771718504Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772256290Z" level=info msg="Daemon shutdown complete"
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772299622Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772335193Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:04:27 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:04:27 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:04:27 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.830406410Z" level=info msg="Starting up"
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.831086390Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.831693770Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1116
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.848641622Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863626041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863671566Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863699235Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863708712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863727802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863758601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863869102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863904989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863916496Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863923037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863938722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.864020484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865599288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865639177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865731864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865767548Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865785377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865797336Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865805263Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865936899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865986043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866000571Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866019013Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866031385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866061570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866216566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866284739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866298285Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866307231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866315736Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866324028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866331863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866344220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866353675Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866361543Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866372054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866387053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866405250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866422316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866433788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866442904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866450488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866459088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866466532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866474955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866485542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866494979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866502166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866509752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866519466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866530829Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866543892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866560549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866571171Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866598589Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866631721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866644061Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866653729Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866722356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866733559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866740572Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866863957Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866919541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866949147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866963090Z" level=info msg="containerd successfully booted in 0.018941s"
	May 05 22:04:28 pause-645000 dockerd[1110]: time="2024-05-05T22:04:28.862930247Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:04:28 pause-645000 dockerd[1110]: time="2024-05-05T22:04:28.897827673Z" level=info msg="Loading containers: start."
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.000940016Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.046836611Z" level=info msg="Loading containers: done."
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.058848385Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.059056322Z" level=info msg="Daemon has completed initialization"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.091875642Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.092045005Z" level=info msg="API listen on [::]:2376"
	May 05 22:04:29 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010851574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010910340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010922426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010986207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010812601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010903491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010913454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010976004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062379694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062653120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062726257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062883580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067080693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067236424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067373533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067561449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.193843683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.194413365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.198893407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.199112992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.243909357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244051841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244079411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244165041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272128853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272508363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272594272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272836137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.282047465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.285848940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.285993883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.286157706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255612977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255690579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255704073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255886366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267254007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267303052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267315045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267382645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.268948418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269024549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269039404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269109778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447229918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447355916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447431412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447595902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744244095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744434127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744528482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.745146443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.774711213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775321916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775438731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775627095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:05:04 pause-645000 dockerd[1110]: time="2024-05-05T22:05:04.940523205Z" level=info msg="ignoring event" container=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.940955922Z" level=info msg="shim disconnected" id=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c namespace=moby
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.941308864Z" level=warning msg="cleaning up after shim disconnected" id=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c namespace=moby
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.941351588Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1110]: time="2024-05-05T22:05:05.018497838Z" level=info msg="ignoring event" container=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018615326Z" level=info msg="shim disconnected" id=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018668781Z" level=warning msg="cleaning up after shim disconnected" id=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018677334Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.857797667Z" level=info msg="Processing signal 'terminated'"
	May 05 22:05:37 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.910084844Z" level=info msg="ignoring event" container=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.909940682Z" level=info msg="shim disconnected" id=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.910575712Z" level=warning msg="cleaning up after shim disconnected" id=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.910618143Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.918168394Z" level=info msg="ignoring event" container=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919043972Z" level=info msg="shim disconnected" id=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919095717Z" level=warning msg="cleaning up after shim disconnected" id=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919105638Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.937586306Z" level=info msg="ignoring event" container=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937925946Z" level=info msg="shim disconnected" id=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937978401Z" level=warning msg="cleaning up after shim disconnected" id=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937987855Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.001904538Z" level=info msg="ignoring event" container=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002397918Z" level=info msg="shim disconnected" id=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002502025Z" level=warning msg="cleaning up after shim disconnected" id=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002535269Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038301522Z" level=info msg="shim disconnected" id=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038422169Z" level=warning msg="cleaning up after shim disconnected" id=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038467263Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.042174084Z" level=info msg="ignoring event" container=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.042929889Z" level=info msg="shim disconnected" id=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.043006374Z" level=warning msg="cleaning up after shim disconnected" id=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.043015840Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.044857263Z" level=info msg="ignoring event" container=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.045909183Z" level=info msg="ignoring event" container=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046399023Z" level=info msg="shim disconnected" id=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046449147Z" level=warning msg="cleaning up after shim disconnected" id=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046458601Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.047947573Z" level=info msg="ignoring event" container=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048506864Z" level=info msg="shim disconnected" id=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048637283Z" level=warning msg="cleaning up after shim disconnected" id=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048681868Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.060093311Z" level=info msg="ignoring event" container=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.061060116Z" level=info msg="ignoring event" container=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062027102Z" level=info msg="shim disconnected" id=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062123279Z" level=warning msg="cleaning up after shim disconnected" id=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062133113Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063205216Z" level=info msg="shim disconnected" id=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063297549Z" level=warning msg="cleaning up after shim disconnected" id=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063309660Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1110]: time="2024-05-05T22:05:42.900020066Z" level=info msg="ignoring event" container=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.899915356Z" level=info msg="shim disconnected" id=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.900164493Z" level=warning msg="cleaning up after shim disconnected" id=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.900173728Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.951615452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.973609005Z" level=info msg="ignoring event" container=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974154290Z" level=info msg="shim disconnected" id=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974222343Z" level=warning msg="cleaning up after shim disconnected" id=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974231664Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.997856465Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998074237Z" level=info msg="Daemon shutdown complete"
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998160356Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998161134Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:05:49 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:05:49 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:05:49 pause-645000 systemd[1]: docker.service: Consumed 2.423s CPU time.
	May 05 22:05:49 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:05:49 pause-645000 dockerd[3389]: time="2024-05-05T22:05:49.044774089Z" level=info msg="Starting up"
	May 05 22:06:49 pause-645000 dockerd[3389]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 22:06:49 pause-645000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 05 22:02:20 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.363626739Z" level=info msg="Starting up"
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.364091828Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.364761996Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=538
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.384020529Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397540571Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397605043Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397667825Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397703216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397779520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397875075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398023069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398068231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398099787Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398128474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398212970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398386313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.399920190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.399971136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400109694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400152898Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400274644Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400343933Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400377825Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444629703Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444719350Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444882553Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444935653Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445019189Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445162018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445676001Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445836736Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445882147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445967502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446005281Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446087839Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446133454Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446170385Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446251921Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446289634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446730917Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446844525Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446893778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446926968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447008018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447050490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447085883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447162789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447203774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447234734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447311423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447355601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447387242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447463183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447508551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447542247Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447624977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447666691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447696787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447818875Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447861491Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447960601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448001863Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448119447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448160738Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448250517Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448516552Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448605593Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448726553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448835781Z" level=info msg="containerd successfully booted in 0.066498s"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.398416761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.406140571Z" level=info msg="Loading containers: start."
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.524878173Z" level=info msg="Loading containers: done."
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.535589443Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.535715946Z" level=info msg="Daemon has completed initialization"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.557155610Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.557240778Z" level=info msg="API listen on [::]:2376"
	May 05 22:02:21 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.527362017Z" level=info msg="Processing signal 'terminated'"
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528496546Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528775088Z" level=info msg="Daemon shutdown complete"
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528805123Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528818433Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:02:22 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:02:23 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:02:23 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:02:23 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.578745179Z" level=info msg="Starting up"
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.579376599Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.579925846Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=798
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.599118343Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613605729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613667649Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613714779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613778603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613834916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613867679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613988096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614026339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614056151Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614084973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614121618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614221653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616326370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616384222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616520418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616566163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616606113Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616643075Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616673617Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616845013Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616899253Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616934642Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616969174Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617006078Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617063162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617263909Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617337847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617374147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617409485Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617442442Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617481236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617516776Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617548406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617585906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617619260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617659592Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617705591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617778181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617824706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617860232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617897588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617929593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617960413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617990602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618021790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618053425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618092754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618127877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618158885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618189785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618229274Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618273536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618306963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618337496Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618412842Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618457088Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618489867Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618518723Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618584489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618625800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618656291Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618906091Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618998507Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.619059845Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.619098953Z" level=info msg="containerd successfully booted in 0.020696s"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.620397385Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.630600379Z" level=info msg="Loading containers: start."
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.731990181Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.771218862Z" level=info msg="Loading containers: done."
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.778787006Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.778874190Z" level=info msg="Daemon has completed initialization"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.795601035Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:02:24 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.795843198Z" level=info msg="API listen on [::]:2376"
	May 05 22:04:26 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.770630419Z" level=info msg="Processing signal 'terminated'"
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.771718504Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772256290Z" level=info msg="Daemon shutdown complete"
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772299622Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772335193Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:04:27 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:04:27 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:04:27 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.830406410Z" level=info msg="Starting up"
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.831086390Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.831693770Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1116
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.848641622Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863626041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863671566Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863699235Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863708712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863727802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863758601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863869102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863904989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863916496Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863923037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863938722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.864020484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865599288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865639177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865731864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865767548Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865785377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865797336Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865805263Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865936899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865986043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866000571Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866019013Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866031385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866061570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866216566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866284739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866298285Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866307231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866315736Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866324028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866331863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866344220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866353675Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866361543Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866372054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866387053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866405250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866422316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866433788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866442904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866450488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866459088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866466532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866474955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866485542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866494979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866502166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866509752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866519466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866530829Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866543892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866560549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866571171Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866598589Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866631721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866644061Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866653729Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866722356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866733559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866740572Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866863957Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866919541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866949147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866963090Z" level=info msg="containerd successfully booted in 0.018941s"
	May 05 22:04:28 pause-645000 dockerd[1110]: time="2024-05-05T22:04:28.862930247Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:04:28 pause-645000 dockerd[1110]: time="2024-05-05T22:04:28.897827673Z" level=info msg="Loading containers: start."
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.000940016Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.046836611Z" level=info msg="Loading containers: done."
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.058848385Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.059056322Z" level=info msg="Daemon has completed initialization"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.091875642Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.092045005Z" level=info msg="API listen on [::]:2376"
	May 05 22:04:29 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010851574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010910340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010922426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010986207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010812601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010903491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010913454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010976004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062379694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062653120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062726257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062883580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067080693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067236424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067373533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067561449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.193843683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.194413365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.198893407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.199112992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.243909357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244051841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244079411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244165041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272128853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272508363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272594272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272836137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.282047465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.285848940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.285993883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.286157706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255612977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255690579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255704073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255886366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267254007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267303052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267315045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267382645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.268948418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269024549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269039404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269109778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447229918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447355916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447431412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447595902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744244095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744434127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744528482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.745146443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.774711213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775321916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775438731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775627095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:05:04 pause-645000 dockerd[1110]: time="2024-05-05T22:05:04.940523205Z" level=info msg="ignoring event" container=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.940955922Z" level=info msg="shim disconnected" id=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c namespace=moby
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.941308864Z" level=warning msg="cleaning up after shim disconnected" id=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c namespace=moby
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.941351588Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1110]: time="2024-05-05T22:05:05.018497838Z" level=info msg="ignoring event" container=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018615326Z" level=info msg="shim disconnected" id=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018668781Z" level=warning msg="cleaning up after shim disconnected" id=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018677334Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.857797667Z" level=info msg="Processing signal 'terminated'"
	May 05 22:05:37 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.910084844Z" level=info msg="ignoring event" container=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.909940682Z" level=info msg="shim disconnected" id=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.910575712Z" level=warning msg="cleaning up after shim disconnected" id=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.910618143Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.918168394Z" level=info msg="ignoring event" container=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919043972Z" level=info msg="shim disconnected" id=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919095717Z" level=warning msg="cleaning up after shim disconnected" id=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919105638Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.937586306Z" level=info msg="ignoring event" container=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937925946Z" level=info msg="shim disconnected" id=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937978401Z" level=warning msg="cleaning up after shim disconnected" id=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937987855Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.001904538Z" level=info msg="ignoring event" container=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002397918Z" level=info msg="shim disconnected" id=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002502025Z" level=warning msg="cleaning up after shim disconnected" id=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002535269Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038301522Z" level=info msg="shim disconnected" id=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038422169Z" level=warning msg="cleaning up after shim disconnected" id=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038467263Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.042174084Z" level=info msg="ignoring event" container=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.042929889Z" level=info msg="shim disconnected" id=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.043006374Z" level=warning msg="cleaning up after shim disconnected" id=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.043015840Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.044857263Z" level=info msg="ignoring event" container=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.045909183Z" level=info msg="ignoring event" container=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046399023Z" level=info msg="shim disconnected" id=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046449147Z" level=warning msg="cleaning up after shim disconnected" id=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046458601Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.047947573Z" level=info msg="ignoring event" container=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048506864Z" level=info msg="shim disconnected" id=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048637283Z" level=warning msg="cleaning up after shim disconnected" id=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048681868Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.060093311Z" level=info msg="ignoring event" container=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.061060116Z" level=info msg="ignoring event" container=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062027102Z" level=info msg="shim disconnected" id=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062123279Z" level=warning msg="cleaning up after shim disconnected" id=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062133113Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063205216Z" level=info msg="shim disconnected" id=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063297549Z" level=warning msg="cleaning up after shim disconnected" id=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063309660Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1110]: time="2024-05-05T22:05:42.900020066Z" level=info msg="ignoring event" container=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.899915356Z" level=info msg="shim disconnected" id=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.900164493Z" level=warning msg="cleaning up after shim disconnected" id=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.900173728Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.951615452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.973609005Z" level=info msg="ignoring event" container=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974154290Z" level=info msg="shim disconnected" id=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974222343Z" level=warning msg="cleaning up after shim disconnected" id=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974231664Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.997856465Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998074237Z" level=info msg="Daemon shutdown complete"
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998160356Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998161134Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:05:49 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:05:49 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:05:49 pause-645000 systemd[1]: docker.service: Consumed 2.423s CPU time.
	May 05 22:05:49 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:05:49 pause-645000 dockerd[3389]: time="2024-05-05T22:05:49.044774089Z" level=info msg="Starting up"
	May 05 22:06:49 pause-645000 dockerd[3389]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 22:06:49 pause-645000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0505 15:06:48.888650   58814 out.go:239] * 
	* 
	W0505 15:06:48.889776   58814 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 15:06:48.951976   58814 out.go:177] 

** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-darwin-amd64 start -p pause-645000 --alsologtostderr -v=1 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-645000 -n pause-645000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-645000 -n pause-645000: exit status 2 (160.756423ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-645000 logs -n 25
E0505 15:07:31.535825   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 15:08:06.952569   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 15:08:19.428856   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:08:23.893509   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 15:08:47.125928   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-645000 logs -n 25: (2m0.497884556s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-501000      | minikube                  | jenkins | v1.26.0 | 05 May 24 15:00 PDT | 05 May 24 15:01 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --vm-driver=hyperkit           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-228000 stop    | minikube                  | jenkins | v1.26.0 | 05 May 24 15:01 PDT | 05 May 24 15:01 PDT |
	| start   | -p stopped-upgrade-228000      | stopped-upgrade-228000    | jenkins | v1.33.0 | 05 May 24 15:01 PDT | 05 May 24 15:01 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-501000      | running-upgrade-501000    | jenkins | v1.33.0 | 05 May 24 15:01 PDT | 05 May 24 15:02 PDT |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-228000      | stopped-upgrade-228000    | jenkins | v1.33.0 | 05 May 24 15:01 PDT | 05 May 24 15:02 PDT |
	| start   | -p pause-645000 --memory=2048  | pause-645000              | jenkins | v1.33.0 | 05 May 24 15:02 PDT | 05 May 24 15:05 PDT |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=hyperkit   |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-501000      | running-upgrade-501000    | jenkins | v1.33.0 | 05 May 24 15:02 PDT | 05 May 24 15:02 PDT |
	| start   | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:02 PDT |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:02 PDT | 05 May 24 15:03 PDT |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:03 PDT | 05 May 24 15:03 PDT |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:03 PDT | 05 May 24 15:03 PDT |
	| start   | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:03 PDT | 05 May 24 15:03 PDT |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-848000 sudo    | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:03 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:03 PDT | 05 May 24 15:03 PDT |
	| start   | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:03 PDT | 05 May 24 15:04 PDT |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-848000 sudo    | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:04 PDT |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-848000         | NoKubernetes-848000       | jenkins | v1.33.0 | 05 May 24 15:04 PDT | 05 May 24 15:04 PDT |
	| start   | -p force-systemd-flag-033000   | force-systemd-flag-033000 | jenkins | v1.33.0 | 05 May 24 15:04 PDT | 05 May 24 15:04 PDT |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-033000      | force-systemd-flag-033000 | jenkins | v1.33.0 | 05 May 24 15:04 PDT | 05 May 24 15:04 PDT |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-033000   | force-systemd-flag-033000 | jenkins | v1.33.0 | 05 May 24 15:04 PDT | 05 May 24 15:04 PDT |
	| start   | -p force-systemd-env-033000    | force-systemd-env-033000  | jenkins | v1.33.0 | 05 May 24 15:05 PDT | 05 May 24 15:05 PDT |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| start   | -p pause-645000                | pause-645000              | jenkins | v1.33.0 | 05 May 24 15:05 PDT |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-033000       | force-systemd-env-033000  | jenkins | v1.33.0 | 05 May 24 15:05 PDT | 05 May 24 15:05 PDT |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-033000    | force-systemd-env-033000  | jenkins | v1.33.0 | 05 May 24 15:05 PDT | 05 May 24 15:05 PDT |
	| start   | -p cert-expiration-724000      | cert-expiration-724000    | jenkins | v1.33.0 | 05 May 24 15:05 PDT | 05 May 24 15:06 PDT |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=hyperkit              |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 15:05:52
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 15:05:52.951912   58847 out.go:291] Setting OutFile to fd 1 ...
	I0505 15:05:52.952124   58847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 15:05:52.952127   58847 out.go:304] Setting ErrFile to fd 2...
	I0505 15:05:52.952129   58847 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 15:05:52.952332   58847 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 15:05:52.954092   58847 out.go:298] Setting JSON to false
	I0505 15:05:52.976392   58847 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":21923,"bootTime":1714924829,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 15:05:52.976480   58847 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 15:05:52.998027   58847 out.go:177] * [cert-expiration-724000] minikube v1.33.0 on Darwin 14.4.1
	I0505 15:05:53.048800   58847 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 15:05:53.048860   58847 notify.go:220] Checking for updates...
	I0505 15:05:53.093117   58847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 15:05:53.114785   58847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 15:05:53.135671   58847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 15:05:53.156645   58847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 15:05:53.177840   58847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 15:05:53.199499   58847 config.go:182] Loaded profile config "pause-645000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 15:05:53.199633   58847 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 15:05:53.228745   58847 out.go:177] * Using the hyperkit driver based on user configuration
	I0505 15:05:53.270703   58847 start.go:297] selected driver: hyperkit
	I0505 15:05:53.270719   58847 start.go:901] validating driver "hyperkit" against <nil>
	I0505 15:05:53.270741   58847 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 15:05:53.275215   58847 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 15:05:53.275324   58847 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 15:05:53.283575   58847 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 15:05:53.287484   58847 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:05:53.287502   58847 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 15:05:53.287534   58847 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 15:05:53.287740   58847 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 15:05:53.287787   58847 cni.go:84] Creating CNI manager for ""
	I0505 15:05:53.287800   58847 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 15:05:53.287805   58847 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 15:05:53.287879   58847 start.go:340] cluster config:
	{Name:cert-expiration-724000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 15:05:53.287963   58847 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 15:05:53.330729   58847 out.go:177] * Starting "cert-expiration-724000" primary control-plane node in "cert-expiration-724000" cluster
	I0505 15:05:53.351712   58847 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 15:05:53.351781   58847 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 15:05:53.351805   58847 cache.go:56] Caching tarball of preloaded images
	I0505 15:05:53.352012   58847 preload.go:173] Found /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0505 15:05:53.352025   58847 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 15:05:53.352167   58847 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/config.json ...
	I0505 15:05:53.352197   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/config.json: {Name:mke285d023f3121e1246b2c66b32a223dab18ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:05:53.352936   58847 start.go:360] acquireMachinesLock for cert-expiration-724000: {Name:mkf65fb2e833767d0359abdd5cbc015622c5b2df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0505 15:05:53.353065   58847 start.go:364] duration metric: took 100.664µs to acquireMachinesLock for "cert-expiration-724000"
	I0505 15:05:53.353107   58847 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 15:05:53.353189   58847 start.go:125] createHost starting for "" (driver="hyperkit")
	I0505 15:05:53.374723   58847 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0505 15:05:53.374993   58847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:05:53.375050   58847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:05:53.384885   58847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60672
	I0505 15:05:53.385257   58847 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:05:53.385700   58847 main.go:141] libmachine: Using API Version  1
	I0505 15:05:53.385707   58847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:05:53.385973   58847 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:05:53.386091   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetMachineName
	I0505 15:05:53.386177   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:05:53.386283   58847 start.go:159] libmachine.API.Create for "cert-expiration-724000" (driver="hyperkit")
	I0505 15:05:53.386307   58847 client.go:168] LocalClient.Create starting
	I0505 15:05:53.386344   58847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem
	I0505 15:05:53.386391   58847 main.go:141] libmachine: Decoding PEM data...
	I0505 15:05:53.386410   58847 main.go:141] libmachine: Parsing certificate...
	I0505 15:05:53.386470   58847 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem
	I0505 15:05:53.386503   58847 main.go:141] libmachine: Decoding PEM data...
	I0505 15:05:53.386513   58847 main.go:141] libmachine: Parsing certificate...
	I0505 15:05:53.386523   58847 main.go:141] libmachine: Running pre-create checks...
	I0505 15:05:53.386532   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .PreCreateCheck
	I0505 15:05:53.386610   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:53.386768   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetConfigRaw
	I0505 15:05:53.387257   58847 main.go:141] libmachine: Creating machine...
	I0505 15:05:53.387261   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .Create
	I0505 15:05:53.387330   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:53.387452   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | I0505 15:05:53.387325   58855 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 15:05:53.387502   58847 main.go:141] libmachine: (cert-expiration-724000) Downloading /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18602-53665/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0505 15:05:53.567750   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | I0505 15:05:53.567673   58855 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa...
	I0505 15:05:53.614398   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | I0505 15:05:53.614319   58855 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/cert-expiration-724000.rawdisk...
	I0505 15:05:53.614407   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Writing magic tar header
	I0505 15:05:53.614415   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Writing SSH key tar header
	I0505 15:05:53.614892   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | I0505 15:05:53.614848   58855 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000 ...
	I0505 15:05:53.973454   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:53.973468   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/hyperkit.pid
	I0505 15:05:53.973573   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Using UUID 1f5ee289-99f6-4e4c-8d84-e3a071f57163
	I0505 15:05:53.999991   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Generated MAC da:31:86:87:68:91
	I0505 15:05:54.000005   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=cert-expiration-724000
	I0505 15:05:54.000046   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1f5ee289-99f6-4e4c-8d84-e3a071f57163", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(ni
l), CmdLine:"", process:(*os.Process)(nil)}
	I0505 15:05:54.000086   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"1f5ee289-99f6-4e4c-8d84-e3a071f57163", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001141b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(ni
l), CmdLine:"", process:(*os.Process)(nil)}
	I0505 15:05:54.000133   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "1f5ee289-99f6-4e4c-8d84-e3a071f57163", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/cert-expiration-724000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-7
24000/bzimage,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=cert-expiration-724000"}
	I0505 15:05:54.000226   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 1f5ee289-99f6-4e4c-8d84-e3a071f57163 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/cert-expiration-724000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/tty,log=/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/console-ring -f kexec,/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/bzimage,/Users/jenkins/minikube-integration/18602-53665/
.minikube/machines/cert-expiration-724000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=cert-expiration-724000"
	I0505 15:05:54.000257   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0505 15:05:54.003347   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 DEBUG: hyperkit: Pid is 58857
	I0505 15:05:54.003772   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Attempt 0
	I0505 15:05:54.003784   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:54.003858   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:05:54.004863   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Searching for da:31:86:87:68:91 in /var/db/dhcpd_leases ...
	I0505 15:05:54.004996   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found 76 entries in /var/db/dhcpd_leases!
	I0505 15:05:54.005007   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.77 HWAddress:92:27:6e:8d:99:a5 ID:1,92:27:6e:8d:99:a5 Lease:0x66395420}
	I0505 15:05:54.005022   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.76 HWAddress:6a:5d:2f:35:a:94 ID:1,6a:5d:2f:35:a:94 Lease:0x66380281}
	I0505 15:05:54.005030   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.75 HWAddress:6:88:b5:e0:36:68 ID:1,6:88:b5:e0:36:68 Lease:0x66380258}
	I0505 15:05:54.005039   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.74 HWAddress:f6:db:20:e0:7a:83 ID:1,f6:db:20:e0:7a:83 Lease:0x66380224}
	I0505 15:05:54.005054   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.73 HWAddress:ee:54:24:e4:76:67 ID:1,ee:54:24:e4:76:67 Lease:0x66395366}
	I0505 15:05:54.005063   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.72 HWAddress:ea:8b:c:17:38:4b ID:1,ea:8b:c:17:38:4b Lease:0x66395323}
	I0505 15:05:54.005079   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.71 HWAddress:be:f9:81:f1:c7:d1 ID:1,be:f9:81:f1:c7:d1 Lease:0x66395339}
	I0505 15:05:54.005104   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.70 HWAddress:b6:47:e7:3b:2b:f3 ID:1,b6:47:e7:3b:2b:f3 Lease:0x66380196}
	I0505 15:05:54.005125   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.69 HWAddress:32:cd:dd:53:1a:6e ID:1,32:cd:dd:53:1a:6e Lease:0x66395296}
	I0505 15:05:54.005138   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.68 HWAddress:4a:c4:c7:8a:9b:bd ID:1,4a:c4:c7:8a:9b:bd Lease:0x663951b3}
	I0505 15:05:54.005150   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.67 HWAddress:22:f1:5e:b:9f:88 ID:1,22:f1:5e:b:9f:88 Lease:0x66395143}
	I0505 15:05:54.005180   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.66 HWAddress:8e:ae:1c:5e:d4:a3 ID:1,8e:ae:1c:5e:d4:a3 Lease:0x66395116}
	I0505 15:05:54.005186   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.65 HWAddress:8e:fb:49:90:3:fd ID:1,8e:fb:49:90:3:fd Lease:0x6637ff23}
	I0505 15:05:54.005192   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.64 HWAddress:16:3d:8d:41:14:5b ID:1,16:3d:8d:41:14:5b Lease:0x6637fe9c}
	I0505 15:05:54.005196   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.63 HWAddress:1e:24:a1:ec:61:e0 ID:1,1e:24:a1:ec:61:e0 Lease:0x66395061}
	I0505 15:05:54.005217   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.62 HWAddress:2e:b4:4f:76:ed:b0 ID:1,2e:b4:4f:76:ed:b0 Lease:0x66395037}
	I0505 15:05:54.005229   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.61 HWAddress:9e:35:93:ba:99:63 ID:1,9e:35:93:ba:99:63 Lease:0x6637fcc8}
	I0505 15:05:54.005253   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.60 HWAddress:ba:41:eb:61:2d:8d ID:1,ba:41:eb:61:2d:8d Lease:0x6637fc99}
	I0505 15:05:54.005262   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.59 HWAddress:32:3a:8d:af:e6:e7 ID:1,32:3a:8d:af:e6:e7 Lease:0x6637fc68}
	I0505 15:05:54.005271   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.58 HWAddress:32:21:b4:74:53:fd ID:1,32:21:b4:74:53:fd Lease:0x66394da0}
	I0505 15:05:54.005277   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.57 HWAddress:66:ee:c3:ef:61:3b ID:1,66:ee:c3:ef:61:3b Lease:0x66394d35}
	I0505 15:05:54.005304   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.56 HWAddress:5e:4f:b4:14:62:5 ID:1,5e:4f:b4:14:62:5 Lease:0x66394d07}
	I0505 15:05:54.005310   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.55 HWAddress:f6:52:28:c9:d6:ef ID:1,f6:52:28:c9:d6:ef Lease:0x6637fb7b}
	I0505 15:05:54.005315   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 15:05:54.005324   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 15:05:54.005332   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 15:05:54.005339   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 15:05:54.005348   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 15:05:54.005361   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 15:05:54.005367   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 15:05:54.005371   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 15:05:54.005384   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 15:05:54.005393   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 15:05:54.005408   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 15:05:54.005413   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 15:05:54.005418   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 15:05:54.005423   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 15:05:54.005428   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 15:05:54.005435   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 15:05:54.005441   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 15:05:54.005445   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 15:05:54.005454   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 15:05:54.005462   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 15:05:54.005468   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 15:05:54.005475   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 15:05:54.005495   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 15:05:54.005502   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 15:05:54.005507   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 15:05:54.005515   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 15:05:54.005520   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 15:05:54.005524   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 15:05:54.005529   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 15:05:54.005540   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 15:05:54.005548   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 15:05:54.005553   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 15:05:54.005557   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 15:05:54.005570   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 15:05:54.005577   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 15:05:54.005587   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 15:05:54.005592   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 15:05:54.005602   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 15:05:54.005607   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 15:05:54.005612   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 15:05:54.005616   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 15:05:54.005623   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 15:05:54.005628   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 15:05:54.005632   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 15:05:54.005647   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 15:05:54.005659   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 15:05:54.005665   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 15:05:54.005673   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 15:05:54.005678   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 15:05:54.005683   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 15:05:54.005692   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 15:05:54.005706   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 15:05:54.005715   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 15:05:54.010802   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0505 15:05:54.019130   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0505 15:05:54.020043   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 15:05:54.020053   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 15:05:54.020059   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 15:05:54.020063   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 15:05:54.399133   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0505 15:05:54.399140   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0505 15:05:54.514562   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0505 15:05:54.514579   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0505 15:05:54.514588   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0505 15:05:54.514595   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0505 15:05:54.515447   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0505 15:05:54.515462   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0505 15:05:56.006565   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Attempt 1
	I0505 15:05:56.006576   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:56.006673   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:05:56.007460   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Searching for da:31:86:87:68:91 in /var/db/dhcpd_leases ...
	I0505 15:05:56.007564   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found 76 entries in /var/db/dhcpd_leases!
	I0505 15:05:56.007578   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.77 HWAddress:92:27:6e:8d:99:a5 ID:1,92:27:6e:8d:99:a5 Lease:0x66395420}
	I0505 15:05:56.007595   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.76 HWAddress:6a:5d:2f:35:a:94 ID:1,6a:5d:2f:35:a:94 Lease:0x66380281}
	I0505 15:05:56.007599   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.75 HWAddress:6:88:b5:e0:36:68 ID:1,6:88:b5:e0:36:68 Lease:0x66380258}
	I0505 15:05:56.007607   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.74 HWAddress:f6:db:20:e0:7a:83 ID:1,f6:db:20:e0:7a:83 Lease:0x66380224}
	I0505 15:05:56.007613   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.73 HWAddress:ee:54:24:e4:76:67 ID:1,ee:54:24:e4:76:67 Lease:0x66395366}
	I0505 15:05:56.007621   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.72 HWAddress:ea:8b:c:17:38:4b ID:1,ea:8b:c:17:38:4b Lease:0x66395323}
	I0505 15:05:56.007625   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.71 HWAddress:be:f9:81:f1:c7:d1 ID:1,be:f9:81:f1:c7:d1 Lease:0x66395339}
	I0505 15:05:56.007630   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.70 HWAddress:b6:47:e7:3b:2b:f3 ID:1,b6:47:e7:3b:2b:f3 Lease:0x66380196}
	I0505 15:05:56.007634   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.69 HWAddress:32:cd:dd:53:1a:6e ID:1,32:cd:dd:53:1a:6e Lease:0x66395296}
	I0505 15:05:56.007645   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.68 HWAddress:4a:c4:c7:8a:9b:bd ID:1,4a:c4:c7:8a:9b:bd Lease:0x663951b3}
	I0505 15:05:56.007653   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.67 HWAddress:22:f1:5e:b:9f:88 ID:1,22:f1:5e:b:9f:88 Lease:0x66395143}
	I0505 15:05:56.007662   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.66 HWAddress:8e:ae:1c:5e:d4:a3 ID:1,8e:ae:1c:5e:d4:a3 Lease:0x66395116}
	I0505 15:05:56.007669   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.65 HWAddress:8e:fb:49:90:3:fd ID:1,8e:fb:49:90:3:fd Lease:0x6637ff23}
	I0505 15:05:56.007675   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.64 HWAddress:16:3d:8d:41:14:5b ID:1,16:3d:8d:41:14:5b Lease:0x6637fe9c}
	I0505 15:05:56.007679   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.63 HWAddress:1e:24:a1:ec:61:e0 ID:1,1e:24:a1:ec:61:e0 Lease:0x66395061}
	I0505 15:05:56.007684   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.62 HWAddress:2e:b4:4f:76:ed:b0 ID:1,2e:b4:4f:76:ed:b0 Lease:0x66395037}
	I0505 15:05:56.007698   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.61 HWAddress:9e:35:93:ba:99:63 ID:1,9e:35:93:ba:99:63 Lease:0x6637fcc8}
	I0505 15:05:56.007704   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.60 HWAddress:ba:41:eb:61:2d:8d ID:1,ba:41:eb:61:2d:8d Lease:0x6637fc99}
	I0505 15:05:56.007708   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.59 HWAddress:32:3a:8d:af:e6:e7 ID:1,32:3a:8d:af:e6:e7 Lease:0x6637fc68}
	I0505 15:05:56.007715   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.58 HWAddress:32:21:b4:74:53:fd ID:1,32:21:b4:74:53:fd Lease:0x66394da0}
	I0505 15:05:56.007720   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.57 HWAddress:66:ee:c3:ef:61:3b ID:1,66:ee:c3:ef:61:3b Lease:0x66394d35}
	I0505 15:05:56.007725   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.56 HWAddress:5e:4f:b4:14:62:5 ID:1,5e:4f:b4:14:62:5 Lease:0x66394d07}
	I0505 15:05:56.007733   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.55 HWAddress:f6:52:28:c9:d6:ef ID:1,f6:52:28:c9:d6:ef Lease:0x6637fb7b}
	I0505 15:05:56.007738   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 15:05:56.007743   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 15:05:56.007747   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 15:05:56.007753   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 15:05:56.007757   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 15:05:56.007763   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 15:05:56.007767   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 15:05:56.007772   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 15:05:56.007779   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 15:05:56.007784   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 15:05:56.007788   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 15:05:56.007793   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 15:05:56.007797   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 15:05:56.007806   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 15:05:56.007812   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 15:05:56.007817   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 15:05:56.007824   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 15:05:56.007830   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 15:05:56.007836   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 15:05:56.007845   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 15:05:56.007851   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 15:05:56.007856   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 15:05:56.007860   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 15:05:56.007908   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 15:05:56.007928   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 15:05:56.007934   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 15:05:56.007938   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 15:05:56.007943   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 15:05:56.007948   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 15:05:56.007955   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 15:05:56.007960   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 15:05:56.007964   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 15:05:56.007969   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 15:05:56.007976   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 15:05:56.007980   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 15:05:56.007985   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 15:05:56.007992   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 15:05:56.007997   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 15:05:56.008002   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 15:05:56.008008   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 15:05:56.008012   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 15:05:56.008031   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 15:05:56.008041   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 15:05:56.008050   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 15:05:56.008056   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 15:05:56.008061   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 15:05:56.008066   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 15:05:56.008070   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 15:05:56.008075   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 15:05:56.008079   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 15:05:56.008084   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 15:05:56.008089   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 15:05:56.008098   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 15:05:58.008063   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Attempt 2
	I0505 15:05:58.008073   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:05:58.008149   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:05:58.008934   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Searching for da:31:86:87:68:91 in /var/db/dhcpd_leases ...
	I0505 15:05:58.009031   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found 76 entries in /var/db/dhcpd_leases!
	I0505 15:05:58.009037   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.77 HWAddress:92:27:6e:8d:99:a5 ID:1,92:27:6e:8d:99:a5 Lease:0x66395420}
	I0505 15:05:58.009044   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.76 HWAddress:6a:5d:2f:35:a:94 ID:1,6a:5d:2f:35:a:94 Lease:0x66380281}
	I0505 15:05:58.009048   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.75 HWAddress:6:88:b5:e0:36:68 ID:1,6:88:b5:e0:36:68 Lease:0x66380258}
	I0505 15:05:58.009053   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.74 HWAddress:f6:db:20:e0:7a:83 ID:1,f6:db:20:e0:7a:83 Lease:0x66380224}
	I0505 15:05:58.009057   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.73 HWAddress:ee:54:24:e4:76:67 ID:1,ee:54:24:e4:76:67 Lease:0x66395366}
	I0505 15:05:58.009062   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.72 HWAddress:ea:8b:c:17:38:4b ID:1,ea:8b:c:17:38:4b Lease:0x66395323}
	I0505 15:05:58.009067   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.71 HWAddress:be:f9:81:f1:c7:d1 ID:1,be:f9:81:f1:c7:d1 Lease:0x66395339}
	I0505 15:05:58.009073   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.70 HWAddress:b6:47:e7:3b:2b:f3 ID:1,b6:47:e7:3b:2b:f3 Lease:0x66380196}
	I0505 15:05:58.009077   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.69 HWAddress:32:cd:dd:53:1a:6e ID:1,32:cd:dd:53:1a:6e Lease:0x66395296}
	I0505 15:05:58.009091   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.68 HWAddress:4a:c4:c7:8a:9b:bd ID:1,4a:c4:c7:8a:9b:bd Lease:0x663951b3}
	I0505 15:05:58.009100   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.67 HWAddress:22:f1:5e:b:9f:88 ID:1,22:f1:5e:b:9f:88 Lease:0x66395143}
	I0505 15:05:58.009106   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.66 HWAddress:8e:ae:1c:5e:d4:a3 ID:1,8e:ae:1c:5e:d4:a3 Lease:0x66395116}
	I0505 15:05:58.009114   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.65 HWAddress:8e:fb:49:90:3:fd ID:1,8e:fb:49:90:3:fd Lease:0x6637ff23}
	I0505 15:05:58.009120   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.64 HWAddress:16:3d:8d:41:14:5b ID:1,16:3d:8d:41:14:5b Lease:0x6637fe9c}
	I0505 15:05:58.009132   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.63 HWAddress:1e:24:a1:ec:61:e0 ID:1,1e:24:a1:ec:61:e0 Lease:0x66395061}
	I0505 15:05:58.009139   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.62 HWAddress:2e:b4:4f:76:ed:b0 ID:1,2e:b4:4f:76:ed:b0 Lease:0x66395037}
	I0505 15:05:58.009147   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.61 HWAddress:9e:35:93:ba:99:63 ID:1,9e:35:93:ba:99:63 Lease:0x6637fcc8}
	I0505 15:05:58.009152   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.60 HWAddress:ba:41:eb:61:2d:8d ID:1,ba:41:eb:61:2d:8d Lease:0x6637fc99}
	I0505 15:05:58.009156   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.59 HWAddress:32:3a:8d:af:e6:e7 ID:1,32:3a:8d:af:e6:e7 Lease:0x6637fc68}
	I0505 15:05:58.009163   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.58 HWAddress:32:21:b4:74:53:fd ID:1,32:21:b4:74:53:fd Lease:0x66394da0}
	I0505 15:05:58.009168   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.57 HWAddress:66:ee:c3:ef:61:3b ID:1,66:ee:c3:ef:61:3b Lease:0x66394d35}
	I0505 15:05:58.009172   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.56 HWAddress:5e:4f:b4:14:62:5 ID:1,5e:4f:b4:14:62:5 Lease:0x66394d07}
	I0505 15:05:58.009177   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.55 HWAddress:f6:52:28:c9:d6:ef ID:1,f6:52:28:c9:d6:ef Lease:0x6637fb7b}
	I0505 15:05:58.009188   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 15:05:58.009193   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 15:05:58.009198   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 15:05:58.009204   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 15:05:58.009210   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 15:05:58.009218   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 15:05:58.009223   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 15:05:58.009229   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 15:05:58.009237   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 15:05:58.009254   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 15:05:58.009260   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 15:05:58.009265   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 15:05:58.009269   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 15:05:58.009274   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 15:05:58.009278   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 15:05:58.009283   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 15:05:58.009289   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 15:05:58.009294   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 15:05:58.009299   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 15:05:58.009304   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 15:05:58.009310   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 15:05:58.009315   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 15:05:58.009320   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 15:05:58.009325   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 15:05:58.009331   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 15:05:58.009338   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 15:05:58.009342   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 15:05:58.009348   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 15:05:58.009354   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 15:05:58.009360   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 15:05:58.009365   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 15:05:58.009369   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 15:05:58.009373   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 15:05:58.009380   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 15:05:58.009387   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 15:05:58.009392   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 15:05:58.009396   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 15:05:58.009400   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 15:05:58.009405   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 15:05:58.009409   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 15:05:58.009414   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 15:05:58.009419   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 15:05:58.009423   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 15:05:58.009427   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 15:05:58.009432   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 15:05:58.009440   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 15:05:58.009445   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 15:05:58.009452   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 15:05:58.009456   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 15:05:58.009460   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 15:05:58.009465   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 15:05:58.009472   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 15:05:58.009497   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 15:05:59.787632   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0505 15:05:59.787645   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0505 15:05:59.787661   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0505 15:05:59.812800   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | 2024/05/05 15:05:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0505 15:06:00.009878   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Attempt 3
	I0505 15:06:00.009899   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:00.010083   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:00.011445   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Searching for da:31:86:87:68:91 in /var/db/dhcpd_leases ...
	I0505 15:06:00.011626   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found 76 entries in /var/db/dhcpd_leases!
	I0505 15:06:00.011641   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.77 HWAddress:92:27:6e:8d:99:a5 ID:1,92:27:6e:8d:99:a5 Lease:0x66395420}
	I0505 15:06:00.011654   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.76 HWAddress:6a:5d:2f:35:a:94 ID:1,6a:5d:2f:35:a:94 Lease:0x66380281}
	I0505 15:06:00.011666   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.75 HWAddress:6:88:b5:e0:36:68 ID:1,6:88:b5:e0:36:68 Lease:0x66380258}
	I0505 15:06:00.011677   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.74 HWAddress:f6:db:20:e0:7a:83 ID:1,f6:db:20:e0:7a:83 Lease:0x66380224}
	I0505 15:06:00.011685   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.73 HWAddress:ee:54:24:e4:76:67 ID:1,ee:54:24:e4:76:67 Lease:0x66395366}
	I0505 15:06:00.011695   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.72 HWAddress:ea:8b:c:17:38:4b ID:1,ea:8b:c:17:38:4b Lease:0x66395323}
	I0505 15:06:00.011704   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.71 HWAddress:be:f9:81:f1:c7:d1 ID:1,be:f9:81:f1:c7:d1 Lease:0x66395339}
	I0505 15:06:00.011715   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.70 HWAddress:b6:47:e7:3b:2b:f3 ID:1,b6:47:e7:3b:2b:f3 Lease:0x66380196}
	I0505 15:06:00.011723   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.69 HWAddress:32:cd:dd:53:1a:6e ID:1,32:cd:dd:53:1a:6e Lease:0x66395296}
	I0505 15:06:00.011756   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.68 HWAddress:4a:c4:c7:8a:9b:bd ID:1,4a:c4:c7:8a:9b:bd Lease:0x663951b3}
	I0505 15:06:00.011774   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.67 HWAddress:22:f1:5e:b:9f:88 ID:1,22:f1:5e:b:9f:88 Lease:0x66395143}
	I0505 15:06:00.011808   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.66 HWAddress:8e:ae:1c:5e:d4:a3 ID:1,8e:ae:1c:5e:d4:a3 Lease:0x66395116}
	I0505 15:06:00.011817   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.65 HWAddress:8e:fb:49:90:3:fd ID:1,8e:fb:49:90:3:fd Lease:0x6637ff23}
	I0505 15:06:00.011827   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.64 HWAddress:16:3d:8d:41:14:5b ID:1,16:3d:8d:41:14:5b Lease:0x6637fe9c}
	I0505 15:06:00.011837   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.63 HWAddress:1e:24:a1:ec:61:e0 ID:1,1e:24:a1:ec:61:e0 Lease:0x66395061}
	I0505 15:06:00.011843   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.62 HWAddress:2e:b4:4f:76:ed:b0 ID:1,2e:b4:4f:76:ed:b0 Lease:0x66395037}
	I0505 15:06:00.011855   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.61 HWAddress:9e:35:93:ba:99:63 ID:1,9e:35:93:ba:99:63 Lease:0x6637fcc8}
	I0505 15:06:00.011861   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.60 HWAddress:ba:41:eb:61:2d:8d ID:1,ba:41:eb:61:2d:8d Lease:0x6637fc99}
	I0505 15:06:00.011868   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.59 HWAddress:32:3a:8d:af:e6:e7 ID:1,32:3a:8d:af:e6:e7 Lease:0x6637fc68}
	I0505 15:06:00.011874   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.58 HWAddress:32:21:b4:74:53:fd ID:1,32:21:b4:74:53:fd Lease:0x66394da0}
	I0505 15:06:00.011900   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.57 HWAddress:66:ee:c3:ef:61:3b ID:1,66:ee:c3:ef:61:3b Lease:0x66394d35}
	I0505 15:06:00.011910   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.56 HWAddress:5e:4f:b4:14:62:5 ID:1,5e:4f:b4:14:62:5 Lease:0x66394d07}
	I0505 15:06:00.011918   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.55 HWAddress:f6:52:28:c9:d6:ef ID:1,f6:52:28:c9:d6:ef Lease:0x6637fb7b}
	I0505 15:06:00.011924   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 15:06:00.011931   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 15:06:00.011937   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 15:06:00.011943   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 15:06:00.011949   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 15:06:00.011962   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 15:06:00.011968   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 15:06:00.011975   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 15:06:00.011981   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 15:06:00.011988   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 15:06:00.011994   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 15:06:00.012000   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 15:06:00.012011   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 15:06:00.012018   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 15:06:00.012049   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 15:06:00.012058   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 15:06:00.012065   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 15:06:00.012075   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 15:06:00.012081   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 15:06:00.012087   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 15:06:00.012101   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 15:06:00.012114   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 15:06:00.012121   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 15:06:00.012138   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 15:06:00.012146   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 15:06:00.012154   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 15:06:00.012161   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 15:06:00.012167   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 15:06:00.012181   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 15:06:00.012190   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 15:06:00.012204   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 15:06:00.012213   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 15:06:00.012220   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 15:06:00.012226   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 15:06:00.012233   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 15:06:00.012239   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 15:06:00.012246   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 15:06:00.012254   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 15:06:00.012280   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 15:06:00.012294   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 15:06:00.012301   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 15:06:00.012309   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 15:06:00.012316   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 15:06:00.012324   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 15:06:00.012331   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 15:06:00.012339   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 15:06:00.012346   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 15:06:00.012352   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 15:06:00.012373   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 15:06:00.012385   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 15:06:00.012393   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 15:06:00.012401   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 15:06:00.012415   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 15:06:02.012120   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Attempt 4
	I0505 15:06:02.012131   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:02.012237   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:02.012992   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Searching for da:31:86:87:68:91 in /var/db/dhcpd_leases ...
	I0505 15:06:02.013090   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found 76 entries in /var/db/dhcpd_leases!
	I0505 15:06:02.013097   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.77 HWAddress:92:27:6e:8d:99:a5 ID:1,92:27:6e:8d:99:a5 Lease:0x66395420}
	I0505 15:06:02.013105   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.76 HWAddress:6a:5d:2f:35:a:94 ID:1,6a:5d:2f:35:a:94 Lease:0x66380281}
	I0505 15:06:02.013109   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.75 HWAddress:6:88:b5:e0:36:68 ID:1,6:88:b5:e0:36:68 Lease:0x66380258}
	I0505 15:06:02.013117   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.74 HWAddress:f6:db:20:e0:7a:83 ID:1,f6:db:20:e0:7a:83 Lease:0x66380224}
	I0505 15:06:02.013123   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.73 HWAddress:ee:54:24:e4:76:67 ID:1,ee:54:24:e4:76:67 Lease:0x66395366}
	I0505 15:06:02.013131   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.72 HWAddress:ea:8b:c:17:38:4b ID:1,ea:8b:c:17:38:4b Lease:0x66395323}
	I0505 15:06:02.013139   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.71 HWAddress:be:f9:81:f1:c7:d1 ID:1,be:f9:81:f1:c7:d1 Lease:0x66395339}
	I0505 15:06:02.013143   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.70 HWAddress:b6:47:e7:3b:2b:f3 ID:1,b6:47:e7:3b:2b:f3 Lease:0x66380196}
	I0505 15:06:02.013149   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.69 HWAddress:32:cd:dd:53:1a:6e ID:1,32:cd:dd:53:1a:6e Lease:0x66395296}
	I0505 15:06:02.013154   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.68 HWAddress:4a:c4:c7:8a:9b:bd ID:1,4a:c4:c7:8a:9b:bd Lease:0x663951b3}
	I0505 15:06:02.013173   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.67 HWAddress:22:f1:5e:b:9f:88 ID:1,22:f1:5e:b:9f:88 Lease:0x66395143}
	I0505 15:06:02.013186   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.66 HWAddress:8e:ae:1c:5e:d4:a3 ID:1,8e:ae:1c:5e:d4:a3 Lease:0x66395116}
	I0505 15:06:02.013192   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.65 HWAddress:8e:fb:49:90:3:fd ID:1,8e:fb:49:90:3:fd Lease:0x6637ff23}
	I0505 15:06:02.013197   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.64 HWAddress:16:3d:8d:41:14:5b ID:1,16:3d:8d:41:14:5b Lease:0x6637fe9c}
	I0505 15:06:02.013202   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.63 HWAddress:1e:24:a1:ec:61:e0 ID:1,1e:24:a1:ec:61:e0 Lease:0x66395061}
	I0505 15:06:02.013206   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.62 HWAddress:2e:b4:4f:76:ed:b0 ID:1,2e:b4:4f:76:ed:b0 Lease:0x66395037}
	I0505 15:06:02.013215   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.61 HWAddress:9e:35:93:ba:99:63 ID:1,9e:35:93:ba:99:63 Lease:0x6637fcc8}
	I0505 15:06:02.013220   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.60 HWAddress:ba:41:eb:61:2d:8d ID:1,ba:41:eb:61:2d:8d Lease:0x6637fc99}
	I0505 15:06:02.013226   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.59 HWAddress:32:3a:8d:af:e6:e7 ID:1,32:3a:8d:af:e6:e7 Lease:0x6637fc68}
	I0505 15:06:02.013230   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.58 HWAddress:32:21:b4:74:53:fd ID:1,32:21:b4:74:53:fd Lease:0x66394da0}
	I0505 15:06:02.013246   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.57 HWAddress:66:ee:c3:ef:61:3b ID:1,66:ee:c3:ef:61:3b Lease:0x66394d35}
	I0505 15:06:02.013250   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.56 HWAddress:5e:4f:b4:14:62:5 ID:1,5e:4f:b4:14:62:5 Lease:0x66394d07}
	I0505 15:06:02.013260   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.55 HWAddress:f6:52:28:c9:d6:ef ID:1,f6:52:28:c9:d6:ef Lease:0x6637fb7b}
	I0505 15:06:02.013265   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:f6:fa:b5:fe:20:2f ID:1,f6:fa:b5:fe:20:2f Lease:0x66394b0c}
	I0505 15:06:02.013270   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:ce:17:a:56:1e:f8 ID:1,ce:17:a:56:1e:f8 Lease:0x66394a07}
	I0505 15:06:02.013275   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:92:83:2c:36:f7:7d ID:1,92:83:2c:36:f7:7d Lease:0x66394aea}
	I0505 15:06:02.013293   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:72:52:a3:7d:5c:d1 ID:1,72:52:a3:7d:5c:d1 Lease:0x66394ad8}
	I0505 15:06:02.013302   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:52:c5:3c:9d:14:e ID:1,52:c5:3c:9d:14:e Lease:0x663946c7}
	I0505 15:06:02.013308   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:e2:e0:ed:33:dd:5e ID:1,e2:e0:ed:33:dd:5e Lease:0x6637f4a5}
	I0505 15:06:02.013313   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:36:8b:71:f2:1a:8e ID:1,36:8b:71:f2:1a:8e Lease:0x66394428}
	I0505 15:06:02.013317   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:aa:f4:65:9c:ae:46 ID:1,aa:f4:65:9c:ae:46 Lease:0x66391eb5}
	I0505 15:06:02.013323   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:fe:3a:ee:70:a1:f4 ID:1,fe:3a:ee:70:a1:f4 Lease:0x66391e3b}
	I0505 15:06:02.013329   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:a6:ae:ed:1a:27:3 ID:1,a6:ae:ed:1a:27:3 Lease:0x66391d28}
	I0505 15:06:02.013334   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:7e:2a:b5:61:3a:bd ID:1,7e:2a:b5:61:3a:bd Lease:0x66391c4f}
	I0505 15:06:02.013339   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:d6:14:fe:95:d1:bb ID:1,d6:14:fe:95:d1:bb Lease:0x66391c1d}
	I0505 15:06:02.013345   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:2a:be:b1:4e:ea:a ID:1,2a:be:b1:4e:ea:a Lease:0x66391b62}
	I0505 15:06:02.013353   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:da:9f:e2:ec:3c:9b ID:1,da:9f:e2:ec:3c:9b Lease:0x66391b0a}
	I0505 15:06:02.013357   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:d6:4e:a2:a0:55:dc ID:1,d6:4e:a2:a0:55:dc Lease:0x66391af3}
	I0505 15:06:02.013362   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:a6:e3:ff:16:85:55 ID:1,a6:e3:ff:16:85:55 Lease:0x66391a30}
	I0505 15:06:02.013369   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:7a:f6:c5:4:75:e5 ID:1,7a:f6:c5:4:75:e5 Lease:0x66391a1f}
	I0505 15:06:02.013373   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8e:6b:8b:ff:c9:7 ID:1,8e:6b:8b:ff:c9:7 Lease:0x663919de}
	I0505 15:06:02.013391   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:2:e2:2:6b:6b:57 ID:1,2:e2:2:6b:6b:57 Lease:0x663919c2}
	I0505 15:06:02.013398   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:5a:ae:9c:a1:2c:f3 ID:1,5a:ae:9c:a1:2c:f3 Lease:0x66391973}
	I0505 15:06:02.013404   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:d2:88:f3:9:fe:cd ID:1,d2:88:f3:9:fe:cd Lease:0x66391947}
	I0505 15:06:02.013408   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:66:50:86:0:9c:7f ID:1,66:50:86:0:9c:7f Lease:0x6637c7bc}
	I0505 15:06:02.013420   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:3a:9c:ad:a2:3:6a ID:1,3a:9c:ad:a2:3:6a Lease:0x6637c78e}
	I0505 15:06:02.013430   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:12:b5:58:55:5e:71 ID:1,12:b5:58:55:5e:71 Lease:0x663918d9}
	I0505 15:06:02.013437   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:76:18:d8:55:2a:95 ID:1,76:18:d8:55:2a:95 Lease:0x663918bb}
	I0505 15:06:02.013441   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:82:1f:3c:c0:72:91 ID:1,82:1f:3c:c0:72:91 Lease:0x66391897}
	I0505 15:06:02.013448   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:ca:d8:83:72:cb:c4 ID:1,ca:d8:83:72:cb:c4 Lease:0x6639183f}
	I0505 15:06:02.013452   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:32:fd:f9:e7:8:bb ID:1,32:fd:f9:e7:8:bb Lease:0x66391784}
	I0505 15:06:02.013457   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:83:a6:bf:23:4b ID:1,ae:83:a6:bf:23:4b Lease:0x66391760}
	I0505 15:06:02.013461   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:82:cc:83:cb:6c:47 ID:1,82:cc:83:cb:6c:47 Lease:0x66391734}
	I0505 15:06:02.013472   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:1b:be:ef:ad ID:1,4a:af:1b:be:ef:ad Lease:0x6639170a}
	I0505 15:06:02.013480   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:e6:9b:2e:f9:86:de ID:1,e6:9b:2e:f9:86:de Lease:0x6637c5f9}
	I0505 15:06:02.013487   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:46:a7:e6:29:52:8c ID:1,46:a7:e6:29:52:8c Lease:0x663916cd}
	I0505 15:06:02.013492   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:b2:82:eb:21:cf:b7 ID:1,b2:82:eb:21:cf:b7 Lease:0x6639165f}
	I0505 15:06:02.013496   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:52:d3:86:65:d6:a9 ID:1,52:d3:86:65:d6:a9 Lease:0x663915f0}
	I0505 15:06:02.013507   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7e:b9:b1:4a:c5:24 ID:1,7e:b9:b1:4a:c5:24 Lease:0x663915c3}
	I0505 15:06:02.013515   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:a:24:1:17:3a:66 ID:1,a:24:1:17:3a:66 Lease:0x6637c34e}
	I0505 15:06:02.013520   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:9e:c5:38:c6:61:9a ID:1,9e:c5:38:c6:61:9a Lease:0x66391514}
	I0505 15:06:02.013533   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:8a:2f:87:5c:8d:d4 ID:1,8a:2f:87:5c:8d:d4 Lease:0x663914ea}
	I0505 15:06:02.013540   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:9e:54:9b:5b:ae:fb ID:1,9e:54:9b:5b:ae:fb Lease:0x66390dcf}
	I0505 15:06:02.013546   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:12:1b:d2:24:1f:f6 ID:1,12:1b:d2:24:1f:f6 Lease:0x6637bc3f}
	I0505 15:06:02.013550   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:f6:e7:d9:1:6b:86 ID:1,f6:e7:d9:1:6b:86 Lease:0x66390d76}
	I0505 15:06:02.013554   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:d6:d6:bd:9c:d7:d7 ID:1,d6:d6:bd:9c:d7:d7 Lease:0x66390cbd}
	I0505 15:06:02.013559   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:b2:a5:b1:29:d1:85 ID:1,b2:a5:b1:29:d1:85 Lease:0x6637bb33}
	I0505 15:06:02.013563   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:36:67:2:e8:f4:c1 ID:1,36:67:2:e8:f4:c1 Lease:0x66390c34}
	I0505 15:06:02.013574   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:32:7a:24:40:4c:e9 ID:1,32:7a:24:40:4c:e9 Lease:0x66390c1a}
	I0505 15:06:02.013589   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:8e:b1:47:5c:8f:bb ID:1,8e:b1:47:5c:8f:bb Lease:0x6637ba42}
	I0505 15:06:02.013595   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:42:38:dd:54:2a:ca ID:1,42:38:dd:54:2a:ca Lease:0x66390bf8}
	I0505 15:06:02.013601   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b6:c9:cf:d:ee:b0 ID:1,b6:c9:cf:d:ee:b0 Lease:0x66390be6}
	I0505 15:06:02.013606   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4e:81:5b:49:bb:17 ID:1,4e:81:5b:49:bb:17 Lease:0x663907ac}
	I0505 15:06:02.013610   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:1e:72:bc:80:53:85 ID:1,1e:72:bc:80:53:85 Lease:0x6637b58a}
	I0505 15:06:02.013616   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:4a:8e:d2:30:61 ID:1,de:4a:8e:d2:30:61 Lease:0x663905e7}
	I0505 15:06:02.013624   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x663943e8}
	I0505 15:06:04.014418   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Attempt 5
	I0505 15:06:04.014426   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:04.014533   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:04.015332   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Searching for da:31:86:87:68:91 in /var/db/dhcpd_leases ...
	I0505 15:06:04.015416   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found 77 entries in /var/db/dhcpd_leases!
	I0505 15:06:04.015425   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.78 HWAddress:da:31:86:87:68:91 ID:1,da:31:86:87:68:91 Lease:0x6639544a}
	I0505 15:06:04.015431   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Found match: da:31:86:87:68:91
	I0505 15:06:04.015434   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | IP: 192.169.0.78
	I0505 15:06:04.015503   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetConfigRaw
	I0505 15:06:04.016097   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:04.016210   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:04.016311   58847 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0505 15:06:04.016316   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetState
	I0505 15:06:04.016398   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:04.016455   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:04.017231   58847 main.go:141] libmachine: Detecting operating system of created instance...
	I0505 15:06:04.017244   58847 main.go:141] libmachine: Waiting for SSH to be available...
	I0505 15:06:04.017249   58847 main.go:141] libmachine: Getting to WaitForSSH function...
	I0505 15:06:04.017252   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:04.017343   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:04.017431   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:04.017498   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:04.017576   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:04.017695   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:04.017884   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:04.017887   58847 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0505 15:06:05.082467   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 15:06:05.082476   58847 main.go:141] libmachine: Detecting the provisioner...
	I0505 15:06:05.082480   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.082611   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.082688   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.082782   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.082861   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.083004   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:05.083158   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:05.083162   58847 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0505 15:06:05.147127   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0505 15:06:05.147175   58847 main.go:141] libmachine: found compatible host: buildroot
	I0505 15:06:05.147179   58847 main.go:141] libmachine: Provisioning with buildroot...
	I0505 15:06:05.147182   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetMachineName
	I0505 15:06:05.147328   58847 buildroot.go:166] provisioning hostname "cert-expiration-724000"
	I0505 15:06:05.147336   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetMachineName
	I0505 15:06:05.147435   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.147499   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.147575   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.147659   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.147756   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.147917   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:05.148053   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:05.148058   58847 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-724000 && echo "cert-expiration-724000" | sudo tee /etc/hostname
	I0505 15:06:05.220865   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-724000
	
	I0505 15:06:05.220879   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.221007   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.221104   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.221184   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.221275   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.221387   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:05.221545   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:05.221553   58847 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-724000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-724000/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-724000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0505 15:06:05.291806   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0505 15:06:05.291820   58847 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18602-53665/.minikube CaCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18602-53665/.minikube}
	I0505 15:06:05.291833   58847 buildroot.go:174] setting up certificates
	I0505 15:06:05.291847   58847 provision.go:84] configureAuth start
	I0505 15:06:05.291857   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetMachineName
	I0505 15:06:05.291979   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetIP
	I0505 15:06:05.292074   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.292167   58847 provision.go:143] copyHostCerts
	I0505 15:06:05.292265   58847 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem, removing ...
	I0505 15:06:05.292272   58847 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem
	I0505 15:06:05.292470   58847 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/key.pem (1679 bytes)
	I0505 15:06:05.292693   58847 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem, removing ...
	I0505 15:06:05.292696   58847 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem
	I0505 15:06:05.292789   58847 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.pem (1078 bytes)
	I0505 15:06:05.292958   58847 exec_runner.go:144] found /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem, removing ...
	I0505 15:06:05.292961   58847 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem
	I0505 15:06:05.293058   58847 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18602-53665/.minikube/cert.pem (1123 bytes)
	I0505 15:06:05.293213   58847 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-724000 san=[127.0.0.1 192.169.0.78 cert-expiration-724000 localhost minikube]
	I0505 15:06:05.445709   58847 provision.go:177] copyRemoteCerts
	I0505 15:06:05.445770   58847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0505 15:06:05.445784   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.445930   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.446025   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.446111   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.446189   58847 sshutil.go:53] new ssh client: &{IP:192.169.0.78 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa Username:docker}
	I0505 15:06:05.486192   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0505 15:06:05.505888   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0505 15:06:05.524932   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0505 15:06:05.555523   58847 provision.go:87] duration metric: took 263.662202ms to configureAuth
	I0505 15:06:05.555543   58847 buildroot.go:189] setting minikube options for container-runtime
	I0505 15:06:05.555678   58847 config.go:182] Loaded profile config "cert-expiration-724000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 15:06:05.555688   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:05.555830   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.555926   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.556015   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.556098   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.556182   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.556298   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:05.556447   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:05.556451   58847 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0505 15:06:05.626625   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0505 15:06:05.626635   58847 buildroot.go:70] root file system type: tmpfs
	I0505 15:06:05.626698   58847 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0505 15:06:05.626710   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.626840   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.626918   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.626981   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.627059   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.627181   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:05.627340   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:05.627379   58847 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0505 15:06:05.703471   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0505 15:06:05.703488   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:05.703613   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:05.703708   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.703784   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:05.703860   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:05.703984   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:05.704122   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:05.704131   58847 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0505 15:06:07.263030   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0505 15:06:07.263047   58847 main.go:141] libmachine: Checking connection to Docker...
	I0505 15:06:07.263053   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetURL
	I0505 15:06:07.263186   58847 main.go:141] libmachine: Docker is up and running!
	I0505 15:06:07.263189   58847 main.go:141] libmachine: Reticulating splines...
	I0505 15:06:07.263193   58847 client.go:171] duration metric: took 13.876987984s to LocalClient.Create
	I0505 15:06:07.263201   58847 start.go:167] duration metric: took 13.877026449s to libmachine.API.Create "cert-expiration-724000"
	I0505 15:06:07.263205   58847 start.go:293] postStartSetup for "cert-expiration-724000" (driver="hyperkit")
	I0505 15:06:07.263217   58847 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0505 15:06:07.263225   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:07.263359   58847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0505 15:06:07.263369   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:07.263490   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:07.263586   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:07.263679   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:07.263764   58847 sshutil.go:53] new ssh client: &{IP:192.169.0.78 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa Username:docker}
	I0505 15:06:07.302038   58847 ssh_runner.go:195] Run: cat /etc/os-release
	I0505 15:06:07.305229   58847 info.go:137] Remote host: Buildroot 2023.02.9
	I0505 15:06:07.305238   58847 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/addons for local assets ...
	I0505 15:06:07.305336   58847 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18602-53665/.minikube/files for local assets ...
	I0505 15:06:07.305519   58847 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem -> 542102.pem in /etc/ssl/certs
	I0505 15:06:07.305716   58847 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0505 15:06:07.312865   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /etc/ssl/certs/542102.pem (1708 bytes)
	I0505 15:06:07.332850   58847 start.go:296] duration metric: took 69.638883ms for postStartSetup
	I0505 15:06:07.332872   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetConfigRaw
	I0505 15:06:07.333526   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetIP
	I0505 15:06:07.333677   58847 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/config.json ...
	I0505 15:06:07.333995   58847 start.go:128] duration metric: took 13.980900943s to createHost
	I0505 15:06:07.334007   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:07.334101   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:07.334168   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:07.334234   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:07.334314   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:07.334402   58847 main.go:141] libmachine: Using SSH client type: native
	I0505 15:06:07.334528   58847 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xe57bb80] 0xe57e8e0 <nil>  [] 0s} 192.169.0.78 22 <nil> <nil>}
	I0505 15:06:07.334532   58847 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0505 15:06:07.399426   58847 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714946767.596994721
	
	I0505 15:06:07.399432   58847 fix.go:216] guest clock: 1714946767.596994721
	I0505 15:06:07.399436   58847 fix.go:229] Guest: 2024-05-05 15:06:07.596994721 -0700 PDT Remote: 2024-05-05 15:06:07.334 -0700 PDT m=+14.463359722 (delta=262.994721ms)
	I0505 15:06:07.399455   58847 fix.go:200] guest clock delta is within tolerance: 262.994721ms
	I0505 15:06:07.399457   58847 start.go:83] releasing machines lock for "cert-expiration-724000", held for 14.046493185s
	I0505 15:06:07.399473   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:07.399597   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetIP
	I0505 15:06:07.399720   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:07.400064   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:07.400164   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:07.400236   58847 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0505 15:06:07.400260   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:07.400294   58847 ssh_runner.go:195] Run: cat /version.json
	I0505 15:06:07.400302   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:07.400353   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:07.400387   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:07.400438   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:07.400465   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:07.400515   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:07.400551   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:07.400589   58847 sshutil.go:53] new ssh client: &{IP:192.169.0.78 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa Username:docker}
	I0505 15:06:07.400624   58847 sshutil.go:53] new ssh client: &{IP:192.169.0.78 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa Username:docker}
	I0505 15:06:07.435027   58847 ssh_runner.go:195] Run: systemctl --version
	I0505 15:06:07.501034   58847 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0505 15:06:07.505934   58847 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0505 15:06:07.505969   58847 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0505 15:06:07.519631   58847 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0505 15:06:07.519641   58847 start.go:494] detecting cgroup driver to use...
	I0505 15:06:07.519742   58847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 15:06:07.534734   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0505 15:06:07.543898   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0505 15:06:07.552681   58847 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0505 15:06:07.552725   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0505 15:06:07.561554   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 15:06:07.574394   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0505 15:06:07.584349   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0505 15:06:07.592998   58847 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0505 15:06:07.605365   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0505 15:06:07.619306   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0505 15:06:07.630301   58847 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0505 15:06:07.641065   58847 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0505 15:06:07.650218   58847 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0505 15:06:07.658463   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:07.752554   58847 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0505 15:06:07.771298   58847 start.go:494] detecting cgroup driver to use...
	I0505 15:06:07.771368   58847 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0505 15:06:07.784951   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 15:06:07.796489   58847 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0505 15:06:07.810538   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0505 15:06:07.820677   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 15:06:07.831052   58847 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0505 15:06:07.874217   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0505 15:06:07.884485   58847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0505 15:06:07.899494   58847 ssh_runner.go:195] Run: which cri-dockerd
	I0505 15:06:07.902358   58847 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0505 15:06:07.909365   58847 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0505 15:06:07.922687   58847 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0505 15:06:08.022769   58847 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0505 15:06:08.133491   58847 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0505 15:06:08.133552   58847 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0505 15:06:08.148182   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:08.257162   58847 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 15:06:10.534544   58847 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.277379329s)
	I0505 15:06:10.534613   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0505 15:06:10.545997   58847 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0505 15:06:10.559743   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 15:06:10.570821   58847 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0505 15:06:10.671891   58847 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0505 15:06:10.780024   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:10.875470   58847 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0505 15:06:10.888993   58847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0505 15:06:10.899968   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:10.999838   58847 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0505 15:06:11.060081   58847 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0505 15:06:11.060147   58847 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0505 15:06:11.064337   58847 start.go:562] Will wait 60s for crictl version
	I0505 15:06:11.064378   58847 ssh_runner.go:195] Run: which crictl
	I0505 15:06:11.067441   58847 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0505 15:06:11.096661   58847 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0505 15:06:11.096737   58847 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 15:06:11.112051   58847 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0505 15:06:11.151011   58847 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0505 15:06:11.151071   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetIP
	I0505 15:06:11.151448   58847 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0505 15:06:11.155841   58847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 15:06:11.165446   58847 kubeadm.go:877] updating cluster {Name:cert-expiration-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.78 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0505 15:06:11.165510   58847 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 15:06:11.165567   58847 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 15:06:11.176932   58847 docker.go:685] Got preloaded images: 
	I0505 15:06:11.176939   58847 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0505 15:06:11.176984   58847 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 15:06:11.184507   58847 ssh_runner.go:195] Run: which lz4
	I0505 15:06:11.187417   58847 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0505 15:06:11.190407   58847 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0505 15:06:11.190419   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0505 15:06:12.261328   58847 docker.go:649] duration metric: took 1.073961918s to copy over tarball
	I0505 15:06:12.261386   58847 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0505 15:06:14.565937   58847 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304552779s)
	I0505 15:06:14.565946   58847 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0505 15:06:14.592027   58847 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0505 15:06:14.599830   58847 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0505 15:06:14.613434   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:14.707428   58847 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0505 15:06:16.999509   58847 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.292083467s)
	I0505 15:06:16.999594   58847 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0505 15:06:17.012611   58847 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0505 15:06:17.012629   58847 cache_images.go:84] Images are preloaded, skipping loading
	I0505 15:06:17.012639   58847 kubeadm.go:928] updating node { 192.169.0.78 8443 v1.30.0 docker true true} ...
	I0505 15:06:17.012713   58847 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-724000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0505 15:06:17.012778   58847 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0505 15:06:17.031271   58847 cni.go:84] Creating CNI manager for ""
	I0505 15:06:17.031285   58847 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 15:06:17.031297   58847 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0505 15:06:17.031309   58847 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.78 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-724000 NodeName:cert-expiration-724000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0505 15:06:17.031390   58847 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "cert-expiration-724000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0505 15:06:17.031448   58847 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0505 15:06:17.039587   58847 binaries.go:44] Found k8s binaries, skipping transfer
	I0505 15:06:17.039631   58847 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0505 15:06:17.047413   58847 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0505 15:06:17.060929   58847 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0505 15:06:17.074499   58847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
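The four kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered to /var/tmp/minikube/kubeadm.yaml.new by the 2164-byte scp just shown, and a few steps later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm consumes them. A minimal sketch of checking and using such a file by hand on the node, assuming it is already in place ('kubeadm config validate' is available in recent kubeadm releases, including v1.30):

	# Validate the rendered multi-document config against the kubeadm API types.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

	# Show what an init with this config would do, without changing the node.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
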
	I0505 15:06:17.088757   58847 ssh_runner.go:195] Run: grep 192.169.0.78	control-plane.minikube.internal$ /etc/hosts
	I0505 15:06:17.091623   58847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0505 15:06:17.101786   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:17.207151   58847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 15:06:17.221409   58847 certs.go:68] Setting up /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000 for IP: 192.169.0.78
	I0505 15:06:17.221416   58847 certs.go:194] generating shared ca certs ...
	I0505 15:06:17.221425   58847 certs.go:226] acquiring lock for ca certs: {Name:mk4a4c4cb11dfd06f304e9c6007de9e5e149a466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.221601   58847 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key
	I0505 15:06:17.221686   58847 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key
	I0505 15:06:17.221693   58847 certs.go:256] generating profile certs ...
	I0505 15:06:17.221742   58847 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/client.key
	I0505 15:06:17.221751   58847 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/client.crt with IP's: []
	I0505 15:06:17.348507   58847 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/client.crt ...
	I0505 15:06:17.348516   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/client.crt: {Name:mkc8a732012ee69d3021e892482e8ad62f241176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.348848   58847 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/client.key ...
	I0505 15:06:17.348854   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/client.key: {Name:mkeaa2f62a8c4ef698b3f8aa2dc19be0d12c9c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.349072   58847 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.key.10775e61
	I0505 15:06:17.349087   58847 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.crt.10775e61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.78]
	I0505 15:06:17.484580   58847 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.crt.10775e61 ...
	I0505 15:06:17.484590   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.crt.10775e61: {Name:mkbb600fb85e0388a40ff68bcf2637ab816e71dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.484920   58847 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.key.10775e61 ...
	I0505 15:06:17.484931   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.key.10775e61: {Name:mk2e05649b7db8b24ec83e07d96a078286d9f62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.485169   58847 certs.go:381] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.crt.10775e61 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.crt
	I0505 15:06:17.485377   58847 certs.go:385] copying /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.key.10775e61 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.key
	I0505 15:06:17.485541   58847 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.key
	I0505 15:06:17.485553   58847 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.crt with IP's: []
	I0505 15:06:17.542165   58847 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.crt ...
	I0505 15:06:17.542174   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.crt: {Name:mkb81b9d0d1526ddb3e8b880a2b38ede337ce2a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.542413   58847 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.key ...
	I0505 15:06:17.542423   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.key: {Name:mk4216f9866149954ef01dcc229b969199764c4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:17.542820   58847 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem (1338 bytes)
	W0505 15:06:17.542863   58847 certs.go:480] ignoring /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210_empty.pem, impossibly tiny 0 bytes
	I0505 15:06:17.542870   58847 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca-key.pem (1675 bytes)
	I0505 15:06:17.542896   58847 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/ca.pem (1078 bytes)
	I0505 15:06:17.542921   58847 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/cert.pem (1123 bytes)
	I0505 15:06:17.542950   58847 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/key.pem (1679 bytes)
	I0505 15:06:17.543011   58847 certs.go:484] found cert: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem (1708 bytes)
	I0505 15:06:17.543521   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0505 15:06:17.564334   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0505 15:06:17.583420   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0505 15:06:17.602311   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0505 15:06:17.621291   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0505 15:06:17.640938   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0505 15:06:17.660397   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0505 15:06:17.679541   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/cert-expiration-724000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0505 15:06:17.699487   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0505 15:06:17.719408   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/certs/54210.pem --> /usr/share/ca-certificates/54210.pem (1338 bytes)
	I0505 15:06:17.738687   58847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/ssl/certs/542102.pem --> /usr/share/ca-certificates/542102.pem (1708 bytes)
	I0505 15:06:17.757604   58847 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0505 15:06:17.771111   58847 ssh_runner.go:195] Run: openssl version
	I0505 15:06:17.775265   58847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/542102.pem && ln -fs /usr/share/ca-certificates/542102.pem /etc/ssl/certs/542102.pem"
	I0505 15:06:17.785002   58847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/542102.pem
	I0505 15:06:17.788403   58847 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  5 21:08 /usr/share/ca-certificates/542102.pem
	I0505 15:06:17.788438   58847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/542102.pem
	I0505 15:06:17.792620   58847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/542102.pem /etc/ssl/certs/3ec20f2e.0"
	I0505 15:06:17.801971   58847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0505 15:06:17.811052   58847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0505 15:06:17.814492   58847 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  5 20:59 /usr/share/ca-certificates/minikubeCA.pem
	I0505 15:06:17.814530   58847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0505 15:06:17.818634   58847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0505 15:06:17.827770   58847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/54210.pem && ln -fs /usr/share/ca-certificates/54210.pem /etc/ssl/certs/54210.pem"
	I0505 15:06:17.836859   58847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/54210.pem
	I0505 15:06:17.840222   58847 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  5 21:08 /usr/share/ca-certificates/54210.pem
	I0505 15:06:17.840256   58847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/54210.pem
	I0505 15:06:17.844374   58847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/54210.pem /etc/ssl/certs/51391683.0"
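The openssl and ln steps above populate the guest's system trust store: each PEM is hashed with 'openssl x509 -hash -noout' and a <hash>.0 symlink is created under /etc/ssl/certs so that OpenSSL-linked clients can look the certificate up by subject hash. The same convention applied to an arbitrary CA file, as a sketch (the file name myCA.pem is illustrative):

	# Compute the subject-name hash OpenSSL uses for trust-store lookups.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/myCA.pem)

	# Expose the certificate under its hash-based name in the trust store.
	sudo ln -fs /usr/share/ca-certificates/myCA.pem "/etc/ssl/certs/${HASH}.0"
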
	I0505 15:06:17.853444   58847 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0505 15:06:17.856763   58847 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0505 15:06:17.856804   58847 kubeadm.go:391] StartCluster: {Name:cert-expiration-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.78 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
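StartCluster is driven entirely by the profile config printed above; the fields that matter for this test are Driver:hyperkit, Memory:2048, CPUs:2, KubernetesVersion:v1.30.0 and the deliberately short CertExpiration:3m0s. A rough CLI equivalent for creating such a profile, as a sketch (flag names are current minikube flags, not copied from this run):

	minikube start -p cert-expiration-724000 \
	  --driver=hyperkit \
	  --memory=2048 --cpus=2 \
	  --kubernetes-version=v1.30.0 \
	  --cert-expiration=3m
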
	I0505 15:06:17.856900   58847 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0505 15:06:17.867076   58847 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0505 15:06:17.875452   58847 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0505 15:06:17.883673   58847 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0505 15:06:17.891839   58847 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0505 15:06:17.891843   58847 kubeadm.go:156] found existing configuration files:
	
	I0505 15:06:17.891877   58847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0505 15:06:17.899660   58847 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0505 15:06:17.899723   58847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0505 15:06:17.907657   58847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0505 15:06:17.915268   58847 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0505 15:06:17.915299   58847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0505 15:06:17.923319   58847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0505 15:06:17.933308   58847 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0505 15:06:17.933360   58847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0505 15:06:17.943166   58847 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0505 15:06:17.950754   58847 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0505 15:06:17.950814   58847 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0505 15:06:17.958764   58847 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0505 15:06:18.138044   58847 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0505 15:06:27.880061   58847 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0505 15:06:27.880103   58847 kubeadm.go:309] [preflight] Running pre-flight checks
	I0505 15:06:27.880160   58847 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0505 15:06:27.880235   58847 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0505 15:06:27.880305   58847 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0505 15:06:27.880358   58847 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0505 15:06:27.913997   58847 out.go:204]   - Generating certificates and keys ...
	I0505 15:06:27.914116   58847 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0505 15:06:27.914225   58847 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0505 15:06:27.914345   58847 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0505 15:06:27.914444   58847 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0505 15:06:27.914534   58847 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0505 15:06:27.914615   58847 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0505 15:06:27.914706   58847 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0505 15:06:27.914904   58847 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-724000 localhost] and IPs [192.169.0.78 127.0.0.1 ::1]
	I0505 15:06:27.915009   58847 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0505 15:06:27.915205   58847 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-724000 localhost] and IPs [192.169.0.78 127.0.0.1 ::1]
	I0505 15:06:27.915305   58847 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0505 15:06:27.915409   58847 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0505 15:06:27.915486   58847 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0505 15:06:27.915573   58847 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0505 15:06:27.915661   58847 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0505 15:06:27.915727   58847 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0505 15:06:27.915790   58847 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0505 15:06:27.915864   58847 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0505 15:06:27.915921   58847 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0505 15:06:27.916008   58847 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0505 15:06:27.916081   58847 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0505 15:06:27.972988   58847 out.go:204]   - Booting up control plane ...
	I0505 15:06:27.973097   58847 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0505 15:06:27.995125   58847 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0505 15:06:27.995225   58847 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0505 15:06:27.995406   58847 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0505 15:06:27.995532   58847 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0505 15:06:27.995593   58847 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0505 15:06:27.995820   58847 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0505 15:06:27.995922   58847 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0505 15:06:27.996037   58847 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.720787ms
	I0505 15:06:27.996144   58847 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0505 15:06:27.996246   58847 kubeadm.go:309] [api-check] The API server is healthy after 4.502304328s
	I0505 15:06:27.996396   58847 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0505 15:06:27.996581   58847 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0505 15:06:27.996676   58847 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0505 15:06:27.996959   58847 kubeadm.go:309] [mark-control-plane] Marking the node cert-expiration-724000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0505 15:06:27.997057   58847 kubeadm.go:309] [bootstrap-token] Using token: f1fw6g.zyi0e1w97l4svblm
	I0505 15:06:28.021116   58847 out.go:204]   - Configuring RBAC rules ...
	I0505 15:06:28.021294   58847 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0505 15:06:28.021424   58847 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0505 15:06:28.021644   58847 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0505 15:06:28.021824   58847 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0505 15:06:28.022018   58847 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0505 15:06:28.022172   58847 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0505 15:06:28.022340   58847 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0505 15:06:28.022411   58847 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0505 15:06:28.022484   58847 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0505 15:06:28.022496   58847 kubeadm.go:309] 
	I0505 15:06:28.022594   58847 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0505 15:06:28.022602   58847 kubeadm.go:309] 
	I0505 15:06:28.022750   58847 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0505 15:06:28.022770   58847 kubeadm.go:309] 
	I0505 15:06:28.022808   58847 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0505 15:06:28.022884   58847 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0505 15:06:28.022963   58847 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0505 15:06:28.022969   58847 kubeadm.go:309] 
	I0505 15:06:28.023053   58847 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0505 15:06:28.023062   58847 kubeadm.go:309] 
	I0505 15:06:28.023128   58847 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0505 15:06:28.023133   58847 kubeadm.go:309] 
	I0505 15:06:28.023239   58847 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0505 15:06:28.023373   58847 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0505 15:06:28.023449   58847 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0505 15:06:28.023454   58847 kubeadm.go:309] 
	I0505 15:06:28.023545   58847 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0505 15:06:28.023633   58847 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0505 15:06:28.023637   58847 kubeadm.go:309] 
	I0505 15:06:28.023733   58847 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token f1fw6g.zyi0e1w97l4svblm \
	I0505 15:06:28.023845   58847 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a \
	I0505 15:06:28.023872   58847 kubeadm.go:309] 	--control-plane 
	I0505 15:06:28.023878   58847 kubeadm.go:309] 
	I0505 15:06:28.023972   58847 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0505 15:06:28.023978   58847 kubeadm.go:309] 
	I0505 15:06:28.024062   58847 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token f1fw6g.zyi0e1w97l4svblm \
	I0505 15:06:28.024171   58847 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:bb322c15c0e2c5248019c1a3f3e860e3246513e220b7324b93ddb5b59ae2d57a 
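Both join commands above carry a bootstrap token plus a pin of the cluster CA public key. If the hash ever needs to be recomputed, the standard kubeadm recipe works against minikube's CA as well; note that the CA lives at /var/lib/minikube/certs/ca.crt here (per the scp steps earlier) rather than the default /etc/kubernetes/pki/ca.crt:

	# Recompute the value passed to --discovery-token-ca-cert-hash.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex \
	  | sed 's/^.* /sha256:/'
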
	I0505 15:06:28.024178   58847 cni.go:84] Creating CNI manager for ""
	I0505 15:06:28.024187   58847 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 15:06:28.080765   58847 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0505 15:06:28.102124   58847 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0505 15:06:28.112464   58847 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
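The 496-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist configures the built-in bridge plugin; its exact contents are not shown in the log. The sketch below writes an illustrative bridge plus host-local configuration of roughly that shape (the plugin options and the subnet are assumptions matching the podSubnet above, not minikube's generated file):

	# Illustrative only: shape of a bridge/host-local conflist, not the actual file.
	# The <<- form lets the heredoc body and terminator keep their leading tabs.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
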
	I0505 15:06:28.128452   58847 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0505 15:06:28.128520   58847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0505 15:06:28.128530   58847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-724000 minikube.k8s.io/updated_at=2024_05_05T15_06_28_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=182cbbc99574885c654f8e32902368a71f76ddd3 minikube.k8s.io/name=cert-expiration-724000 minikube.k8s.io/primary=true
	I0505 15:06:28.137772   58847 ops.go:34] apiserver oom_adj: -16
	I0505 15:06:28.303836   58847 kubeadm.go:1107] duration metric: took 175.378338ms to wait for elevateKubeSystemPrivileges
	W0505 15:06:28.318109   58847 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0505 15:06:28.318117   58847 kubeadm.go:393] duration metric: took 10.461396081s to StartCluster
	I0505 15:06:28.318129   58847 settings.go:142] acquiring lock: {Name:mk42961bbb846d74d4f3eb396c3a07b16222feb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:28.318221   58847 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 15:06:28.318930   58847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/kubeconfig: {Name:mk07bec02cc3957a2a8800c4412eef88581455ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 15:06:28.319158   58847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0505 15:06:28.319177   58847 start.go:234] Will wait 6m0s for node &{Name: IP:192.169.0.78 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0505 15:06:28.341437   58847 out.go:177] * Verifying Kubernetes components...
	I0505 15:06:28.319200   58847 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0505 15:06:28.319383   58847 config.go:182] Loaded profile config "cert-expiration-724000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 15:06:28.341476   58847 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-724000"
	I0505 15:06:28.341465   58847 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-724000"
	I0505 15:06:28.382862   58847 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-724000"
	I0505 15:06:28.382890   58847 host.go:66] Checking if "cert-expiration-724000" exists ...
	I0505 15:06:28.382895   58847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0505 15:06:28.382902   58847 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-724000"
	I0505 15:06:28.383144   58847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:06:28.383159   58847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:06:28.383163   58847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:06:28.383176   58847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:06:28.392542   58847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60695
	I0505 15:06:28.393042   58847 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:06:28.393131   58847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60697
	I0505 15:06:28.393405   58847 main.go:141] libmachine: Using API Version  1
	I0505 15:06:28.393411   58847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:06:28.393495   58847 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:06:28.393644   58847 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:06:28.393825   58847 main.go:141] libmachine: Using API Version  1
	I0505 15:06:28.393833   58847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:06:28.393841   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetState
	I0505 15:06:28.394000   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:28.394032   58847 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:06:28.394101   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:28.394382   58847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:06:28.394404   58847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:06:28.397248   58847 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-724000"
	I0505 15:06:28.397271   58847 host.go:66] Checking if "cert-expiration-724000" exists ...
	I0505 15:06:28.397513   58847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:06:28.397538   58847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:06:28.403459   58847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60699
	I0505 15:06:28.403814   58847 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:06:28.404153   58847 main.go:141] libmachine: Using API Version  1
	I0505 15:06:28.404164   58847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:06:28.404370   58847 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:06:28.404500   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetState
	I0505 15:06:28.404577   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:28.404659   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:28.405645   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:28.426096   58847 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0505 15:06:28.406058   58847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60701
	I0505 15:06:28.467109   58847 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 15:06:28.467118   58847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0505 15:06:28.467136   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:28.467287   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:28.467397   58847 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:06:28.467419   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:28.467540   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:28.467652   58847 sshutil.go:53] new ssh client: &{IP:192.169.0.78 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa Username:docker}
	I0505 15:06:28.467789   58847 main.go:141] libmachine: Using API Version  1
	I0505 15:06:28.467800   58847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:06:28.468023   58847 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:06:28.468416   58847 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 15:06:28.468445   58847 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 15:06:28.477305   58847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60704
	I0505 15:06:28.477649   58847 main.go:141] libmachine: () Calling .GetVersion
	I0505 15:06:28.477977   58847 main.go:141] libmachine: Using API Version  1
	I0505 15:06:28.477985   58847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 15:06:28.478197   58847 main.go:141] libmachine: () Calling .GetMachineName
	I0505 15:06:28.478306   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetState
	I0505 15:06:28.478393   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 15:06:28.478481   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | hyperkit pid from json: 58857
	I0505 15:06:28.479488   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .DriverName
	I0505 15:06:28.479667   58847 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0505 15:06:28.479671   58847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0505 15:06:28.479681   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHHostname
	I0505 15:06:28.479781   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHPort
	I0505 15:06:28.479864   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHKeyPath
	I0505 15:06:28.479960   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .GetSSHUsername
	I0505 15:06:28.480044   58847 sshutil.go:53] new ssh client: &{IP:192.169.0.78 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/cert-expiration-724000/id_rsa Username:docker}
	I0505 15:06:28.504428   58847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0505 15:06:28.536733   58847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0505 15:06:28.612401   58847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0505 15:06:28.620352   58847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0505 15:06:28.826355   58847 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
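The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.169.0.1) ahead of the forward plugin, adds the log plugin after errors, and replaces the ConfigMap via kubectl; the "host record injected" line confirms it succeeded. The injected stanza can be checked afterwards by dumping the Corefile (sketch; Corefile is the standard data key for this ConfigMap):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
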
	I0505 15:06:28.826448   58847 main.go:141] libmachine: Making call to close driver server
	I0505 15:06:28.826454   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .Close
	I0505 15:06:28.826646   58847 main.go:141] libmachine: Successfully made call to close driver server
	I0505 15:06:28.826653   58847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 15:06:28.826656   58847 main.go:141] libmachine: Making call to close driver server
	I0505 15:06:28.826657   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Closing plugin on server side
	I0505 15:06:28.826659   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .Close
	I0505 15:06:28.826818   58847 main.go:141] libmachine: Successfully made call to close driver server
	I0505 15:06:28.826831   58847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 15:06:28.826856   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Closing plugin on server side
	I0505 15:06:28.827222   58847 api_server.go:52] waiting for apiserver process to appear ...
	I0505 15:06:28.827265   58847 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 15:06:28.839020   58847 main.go:141] libmachine: Making call to close driver server
	I0505 15:06:28.839028   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .Close
	I0505 15:06:28.839191   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Closing plugin on server side
	I0505 15:06:28.839209   58847 main.go:141] libmachine: Successfully made call to close driver server
	I0505 15:06:28.839217   58847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 15:06:29.001276   58847 api_server.go:72] duration metric: took 682.081544ms to wait for apiserver process to appear ...
	I0505 15:06:29.001286   58847 api_server.go:88] waiting for apiserver healthz status ...
	I0505 15:06:29.001303   58847 api_server.go:253] Checking apiserver healthz at https://192.169.0.78:8443/healthz ...
	I0505 15:06:29.001400   58847 main.go:141] libmachine: Making call to close driver server
	I0505 15:06:29.001428   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .Close
	I0505 15:06:29.001586   58847 main.go:141] libmachine: Successfully made call to close driver server
	I0505 15:06:29.001595   58847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 15:06:29.001599   58847 main.go:141] libmachine: Making call to close driver server
	I0505 15:06:29.001602   58847 main.go:141] libmachine: (cert-expiration-724000) Calling .Close
	I0505 15:06:29.001608   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Closing plugin on server side
	I0505 15:06:29.001751   58847 main.go:141] libmachine: (cert-expiration-724000) DBG | Closing plugin on server side
	I0505 15:06:29.001776   58847 main.go:141] libmachine: Successfully made call to close driver server
	I0505 15:06:29.001796   58847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0505 15:06:29.028701   58847 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0505 15:06:29.004433   58847 api_server.go:279] https://192.169.0.78:8443/healthz returned 200:
	ok
	I0505 15:06:29.070118   58847 addons.go:510] duration metric: took 750.926691ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0505 15:06:29.071752   58847 api_server.go:141] control plane version: v1.30.0
	I0505 15:06:29.071768   58847 api_server.go:131] duration metric: took 70.477242ms to wait for apiserver health ...
	I0505 15:06:29.071792   58847 system_pods.go:43] waiting for kube-system pods to appear ...
	I0505 15:06:29.076573   58847 system_pods.go:59] 5 kube-system pods found
	I0505 15:06:29.076587   58847 system_pods.go:61] "etcd-cert-expiration-724000" [0848c05b-b628-4161-ad3f-023c79f70373] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0505 15:06:29.076598   58847 system_pods.go:61] "kube-apiserver-cert-expiration-724000" [256f7e82-9e44-45d3-930b-d210c7997845] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0505 15:06:29.076602   58847 system_pods.go:61] "kube-controller-manager-cert-expiration-724000" [15a57c6b-109d-4e62-9f30-d085b173d47f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0505 15:06:29.076606   58847 system_pods.go:61] "kube-scheduler-cert-expiration-724000" [2d4b7aba-e126-4ee8-8125-fba0279965d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0505 15:06:29.076608   58847 system_pods.go:61] "storage-provisioner" [33df06c7-3e81-4c61-b35b-6febd1d414ad] Pending
	I0505 15:06:29.076611   58847 system_pods.go:74] duration metric: took 4.8158ms to wait for pod list to return data ...
	I0505 15:06:29.076617   58847 kubeadm.go:576] duration metric: took 757.434233ms to wait for: map[apiserver:true system_pods:true]
	I0505 15:06:29.076626   58847 node_conditions.go:102] verifying NodePressure condition ...
	I0505 15:06:29.081075   58847 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0505 15:06:29.081085   58847 node_conditions.go:123] node cpu capacity is 2
	I0505 15:06:29.081093   58847 node_conditions.go:105] duration metric: took 4.464979ms to run NodePressure ...
	I0505 15:06:29.081099   58847 start.go:240] waiting for startup goroutines ...
	I0505 15:06:29.329791   58847 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-724000" context rescaled to 1 replicas
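The kapi.go line records the coredns deployment being scaled down to a single replica for this one-node profile; the equivalent manual operation would be (sketch, assuming kubectl already points at this cluster):

	kubectl -n kube-system scale deployment coredns --replicas=1
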
	I0505 15:06:29.329806   58847 start.go:245] waiting for cluster config update ...
	I0505 15:06:29.329817   58847 start.go:254] writing updated cluster config ...
	I0505 15:06:29.330155   58847 ssh_runner.go:195] Run: rm -f paused
	I0505 15:06:29.370148   58847 start.go:600] kubectl: 1.29.2, cluster: 1.30.0 (minor skew: 1)
	I0505 15:06:29.413962   58847 out.go:177] * Done! kubectl is now configured to use "cert-expiration-724000" cluster and "default" namespace by default
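With the profile finished, kubeconfig now selects the cert-expiration-724000 context by default. A quick manual check of that state (context name assumed to equal the profile name, which is minikube's usual behaviour):

	kubectl config current-context
	kubectl --context cert-expiration-724000 get nodes -o wide
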
	I0505 15:06:48.816017   58814 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.207942029s)
	I0505 15:06:48.816090   58814 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0505 15:06:48.866560   58814 out.go:177] 
	W0505 15:06:48.888074   58814 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	May 05 22:02:20 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.363626739Z" level=info msg="Starting up"
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.364091828Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:02:20 pause-645000 dockerd[529]: time="2024-05-05T22:02:20.364761996Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=538
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.384020529Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397540571Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397605043Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397667825Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397703216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397779520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.397875075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398023069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398068231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398099787Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398128474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398212970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.398386313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.399920190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.399971136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400109694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400152898Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400274644Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400343933Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.400377825Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444629703Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444719350Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444882553Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.444935653Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445019189Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445162018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445676001Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445836736Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445882147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.445967502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446005281Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446087839Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446133454Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446170385Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446251921Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446289634Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446730917Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446844525Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446893778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.446926968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447008018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447050490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447085883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447162789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447203774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447234734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447311423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447355601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447387242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447463183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447508551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447542247Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447624977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447666691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447696787Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447818875Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447861491Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.447960601Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448001863Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448119447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448160738Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448250517Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448516552Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448605593Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448726553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:02:20 pause-645000 dockerd[538]: time="2024-05-05T22:02:20.448835781Z" level=info msg="containerd successfully booted in 0.066498s"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.398416761Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.406140571Z" level=info msg="Loading containers: start."
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.524878173Z" level=info msg="Loading containers: done."
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.535589443Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.535715946Z" level=info msg="Daemon has completed initialization"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.557155610Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:02:21 pause-645000 dockerd[529]: time="2024-05-05T22:02:21.557240778Z" level=info msg="API listen on [::]:2376"
	May 05 22:02:21 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.527362017Z" level=info msg="Processing signal 'terminated'"
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528496546Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528775088Z" level=info msg="Daemon shutdown complete"
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528805123Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:02:22 pause-645000 dockerd[529]: time="2024-05-05T22:02:22.528818433Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:02:22 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:02:23 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:02:23 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:02:23 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.578745179Z" level=info msg="Starting up"
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.579376599Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:02:23 pause-645000 dockerd[792]: time="2024-05-05T22:02:23.579925846Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=798
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.599118343Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613605729Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613667649Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613714779Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613778603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613834916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613867679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.613988096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614026339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614056151Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614084973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614121618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.614221653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616326370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616384222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616520418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616566163Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616606113Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616643075Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616673617Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616845013Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616899253Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616934642Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.616969174Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617006078Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617063162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617263909Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617337847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617374147Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617409485Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617442442Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617481236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617516776Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617548406Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617585906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617619260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617659592Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617705591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617778181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617824706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617860232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617897588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617929593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617960413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.617990602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618021790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618053425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618092754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618127877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618158885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618189785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618229274Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618273536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618306963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618337496Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618412842Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618457088Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618489867Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618518723Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618584489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618625800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618656291Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618906091Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.618998507Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.619059845Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:02:23 pause-645000 dockerd[798]: time="2024-05-05T22:02:23.619098953Z" level=info msg="containerd successfully booted in 0.020696s"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.620397385Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.630600379Z" level=info msg="Loading containers: start."
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.731990181Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.771218862Z" level=info msg="Loading containers: done."
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.778787006Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.778874190Z" level=info msg="Daemon has completed initialization"
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.795601035Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:02:24 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:02:24 pause-645000 dockerd[792]: time="2024-05-05T22:02:24.795843198Z" level=info msg="API listen on [::]:2376"
	May 05 22:04:26 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.770630419Z" level=info msg="Processing signal 'terminated'"
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.771718504Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772256290Z" level=info msg="Daemon shutdown complete"
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772299622Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:04:26 pause-645000 dockerd[792]: time="2024-05-05T22:04:26.772335193Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:04:27 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:04:27 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:04:27 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.830406410Z" level=info msg="Starting up"
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.831086390Z" level=info msg="containerd not running, starting managed containerd"
	May 05 22:04:27 pause-645000 dockerd[1110]: time="2024-05-05T22:04:27.831693770Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1116
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.848641622Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863626041Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863671566Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863699235Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863708712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863727802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863758601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863869102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863904989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863916496Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863923037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.863938722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.864020484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865599288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865639177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865731864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865767548Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865785377Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865797336Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865805263Z" level=info msg="metadata content store policy set" policy=shared
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865936899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.865986043Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866000571Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866019013Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866031385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866061570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866216566Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866284739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866298285Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866307231Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866315736Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866324028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866331863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866344220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866353675Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866361543Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866372054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866387053Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866405250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866422316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866433788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866442904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866450488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866459088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866466532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866474955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866485542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866494979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866502166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866509752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866519466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866530829Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866543892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866560549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866571171Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866598589Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866631721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866644061Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866653729Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866722356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866733559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866740572Z" level=info msg="NRI interface is disabled by configuration."
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866863957Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866919541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866949147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	May 05 22:04:27 pause-645000 dockerd[1116]: time="2024-05-05T22:04:27.866963090Z" level=info msg="containerd successfully booted in 0.018941s"
	May 05 22:04:28 pause-645000 dockerd[1110]: time="2024-05-05T22:04:28.862930247Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	May 05 22:04:28 pause-645000 dockerd[1110]: time="2024-05-05T22:04:28.897827673Z" level=info msg="Loading containers: start."
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.000940016Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.046836611Z" level=info msg="Loading containers: done."
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.058848385Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.059056322Z" level=info msg="Daemon has completed initialization"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.091875642Z" level=info msg="API listen on /var/run/docker.sock"
	May 05 22:04:29 pause-645000 dockerd[1110]: time="2024-05-05T22:04:29.092045005Z" level=info msg="API listen on [::]:2376"
	May 05 22:04:29 pause-645000 systemd[1]: Started Docker Application Container Engine.
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010851574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010910340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010922426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010986207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010812601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010903491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010913454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.010976004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062379694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062653120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062726257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.062883580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067080693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067236424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067373533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.067561449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.193843683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.194413365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.198893407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.199112992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.243909357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244051841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244079411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.244165041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272128853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272508363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272594272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.272836137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.282047465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.285848940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.285993883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:35 pause-645000 dockerd[1116]: time="2024-05-05T22:04:35.286157706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255612977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255690579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255704073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.255886366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267254007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267303052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267315045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.267382645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.268948418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269024549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269039404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.269109778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447229918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447355916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447431412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.447595902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744244095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744434127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.744528482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.745146443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.774711213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775321916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775438731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:04:54 pause-645000 dockerd[1116]: time="2024-05-05T22:04:54.775627095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 05 22:05:04 pause-645000 dockerd[1110]: time="2024-05-05T22:05:04.940523205Z" level=info msg="ignoring event" container=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.940955922Z" level=info msg="shim disconnected" id=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c namespace=moby
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.941308864Z" level=warning msg="cleaning up after shim disconnected" id=77aad314dd0d47e395d4814bb10527943dea2d28499c37aa5c70a2e620465e8c namespace=moby
	May 05 22:05:04 pause-645000 dockerd[1116]: time="2024-05-05T22:05:04.941351588Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1110]: time="2024-05-05T22:05:05.018497838Z" level=info msg="ignoring event" container=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018615326Z" level=info msg="shim disconnected" id=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018668781Z" level=warning msg="cleaning up after shim disconnected" id=34f5423bebeed2af72eab344742df566a251ff88a9bd6611cb57929af45f3ac1 namespace=moby
	May 05 22:05:05 pause-645000 dockerd[1116]: time="2024-05-05T22:05:05.018677334Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.857797667Z" level=info msg="Processing signal 'terminated'"
	May 05 22:05:37 pause-645000 systemd[1]: Stopping Docker Application Container Engine...
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.910084844Z" level=info msg="ignoring event" container=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.909940682Z" level=info msg="shim disconnected" id=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.910575712Z" level=warning msg="cleaning up after shim disconnected" id=e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.910618143Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.918168394Z" level=info msg="ignoring event" container=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919043972Z" level=info msg="shim disconnected" id=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919095717Z" level=warning msg="cleaning up after shim disconnected" id=6b9600e0ddc958a0663f061b4aadd352f1618bdf4be745648f71b62a76788d99 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.919105638Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1110]: time="2024-05-05T22:05:37.937586306Z" level=info msg="ignoring event" container=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937925946Z" level=info msg="shim disconnected" id=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937978401Z" level=warning msg="cleaning up after shim disconnected" id=fdae7f7c221ad4e7c1b696038c9d574c54620e3c7e825aee98195d7ae2c950e8 namespace=moby
	May 05 22:05:37 pause-645000 dockerd[1116]: time="2024-05-05T22:05:37.937987855Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.001904538Z" level=info msg="ignoring event" container=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002397918Z" level=info msg="shim disconnected" id=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002502025Z" level=warning msg="cleaning up after shim disconnected" id=0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.002535269Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038301522Z" level=info msg="shim disconnected" id=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038422169Z" level=warning msg="cleaning up after shim disconnected" id=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.038467263Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.042174084Z" level=info msg="ignoring event" container=d50de234386686f01deada134115847c646b16be2634a86ff9c1f044ceec8ff7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.042929889Z" level=info msg="shim disconnected" id=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.043006374Z" level=warning msg="cleaning up after shim disconnected" id=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.043015840Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.044857263Z" level=info msg="ignoring event" container=a68a1d5c7bbcd580a8c9b137d7f986e0fdc5b2d047351166baac98684298fc11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.045909183Z" level=info msg="ignoring event" container=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046399023Z" level=info msg="shim disconnected" id=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046449147Z" level=warning msg="cleaning up after shim disconnected" id=c8d70a309dc6f3dfcda307d8ef92738b73a9ddcfa42f662e942c00c4481f95c3 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.046458601Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.047947573Z" level=info msg="ignoring event" container=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048506864Z" level=info msg="shim disconnected" id=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048637283Z" level=warning msg="cleaning up after shim disconnected" id=835ce6befab749b39e112b59463304e7cefa1365d8699ad58427c3a62ad90228 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.048681868Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.060093311Z" level=info msg="ignoring event" container=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1110]: time="2024-05-05T22:05:38.061060116Z" level=info msg="ignoring event" container=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062027102Z" level=info msg="shim disconnected" id=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062123279Z" level=warning msg="cleaning up after shim disconnected" id=6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.062133113Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063205216Z" level=info msg="shim disconnected" id=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063297549Z" level=warning msg="cleaning up after shim disconnected" id=0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384 namespace=moby
	May 05 22:05:38 pause-645000 dockerd[1116]: time="2024-05-05T22:05:38.063309660Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1110]: time="2024-05-05T22:05:42.900020066Z" level=info msg="ignoring event" container=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.899915356Z" level=info msg="shim disconnected" id=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.900164493Z" level=warning msg="cleaning up after shim disconnected" id=533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7 namespace=moby
	May 05 22:05:42 pause-645000 dockerd[1116]: time="2024-05-05T22:05:42.900173728Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.951615452Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.973609005Z" level=info msg="ignoring event" container=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974154290Z" level=info msg="shim disconnected" id=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974222343Z" level=warning msg="cleaning up after shim disconnected" id=fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611 namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1116]: time="2024-05-05T22:05:47.974231664Z" level=info msg="cleaning up dead shim" namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.997856465Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998074237Z" level=info msg="Daemon shutdown complete"
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998160356Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	May 05 22:05:47 pause-645000 dockerd[1110]: time="2024-05-05T22:05:47.998161134Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	May 05 22:05:49 pause-645000 systemd[1]: docker.service: Deactivated successfully.
	May 05 22:05:49 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:05:49 pause-645000 systemd[1]: docker.service: Consumed 2.423s CPU time.
	May 05 22:05:49 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:05:49 pause-645000 dockerd[3389]: time="2024-05-05T22:05:49.044774089Z" level=info msg="Starting up"
	May 05 22:06:49 pause-645000 dockerd[3389]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 22:06:49 pause-645000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0505 15:06:48.888650   58814 out.go:239] * 
	W0505 15:06:48.889776   58814 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0505 15:06:48.951976   58814 out.go:177] 
	
	
	==> Docker <==
	May 05 22:06:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:06:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7'"
	May 05 22:06:49 pause-645000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	May 05 22:06:49 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:06:49 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	May 05 22:06:49 pause-645000 dockerd[3592]: time="2024-05-05T22:06:49.292379451Z" level=info msg="Starting up"
	May 05 22:07:49 pause-645000 dockerd[3592]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error getting RW layer size for container ID '0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0322cf6c173f3331ce85b637d9762cbc021bb618478a7a9e8177b70c2b5a7384'"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error getting RW layer size for container ID '0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '0a58cfaac1b1f9bfb67658f76b5f5cdb5977a23bbccae015f5b4f7379deffce4'"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error getting RW layer size for container ID '533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '533a1d124c2cd6a277f5059f0d0466e11c179d3de29f2577e620e6e93ae259f7'"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error getting RW layer size for container ID 'fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'fc1c179f091d4a9fbbcdcf10285455d2134c99be7145906ff5dda6f4222d6611'"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error getting RW layer size for container ID '6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6153dd05cf3658796650593a5bad8cfdbeb26a899b7587af4731dd0fb29b0fd5'"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="error getting RW layer size for container ID 'e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:07:49 pause-645000 cri-dockerd[1018]: time="2024-05-05T22:07:49Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e7a49406f44ad49a3e6cfa402b9bc81637fd2739582501c0267e9168e68f3705'"
	May 05 22:07:49 pause-645000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	May 05 22:07:49 pause-645000 systemd[1]: docker.service: Failed with result 'exit-code'.
	May 05 22:07:49 pause-645000 systemd[1]: Failed to start Docker Application Container Engine.
	May 05 22:07:49 pause-645000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	May 05 22:07:49 pause-645000 systemd[1]: Stopped Docker Application Container Engine.
	May 05 22:07:49 pause-645000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-05-05T22:07:51Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.097565] systemd-fstab-generator[516]: Ignoring "noauto" option for root device
	[  +1.752898] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +0.255329] systemd-fstab-generator[758]: Ignoring "noauto" option for root device
	[  +0.108864] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.110607] systemd-fstab-generator[784]: Ignoring "noauto" option for root device
	[May 5 22:04] systemd-fstab-generator[971]: Ignoring "noauto" option for root device
	[  +0.055825] kauditd_printk_skb: 186 callbacks suppressed
	[  +0.048580] systemd-fstab-generator[983]: Ignoring "noauto" option for root device
	[  +0.109650] systemd-fstab-generator[995]: Ignoring "noauto" option for root device
	[  +0.123435] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +3.766290] systemd-fstab-generator[1102]: Ignoring "noauto" option for root device
	[  +2.217102] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.352927] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +4.751124] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[  +0.055728] kauditd_printk_skb: 51 callbacks suppressed
	[  +4.973023] systemd-fstab-generator[1893]: Ignoring "noauto" option for root device
	[  +0.073890] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.414637] systemd-fstab-generator[2125]: Ignoring "noauto" option for root device
	[  +0.087302] kauditd_printk_skb: 12 callbacks suppressed
	[May 5 22:05] kauditd_printk_skb: 90 callbacks suppressed
	[ +32.222475] systemd-fstab-generator[2954]: Ignoring "noauto" option for root device
	[  +0.297659] systemd-fstab-generator[2989]: Ignoring "noauto" option for root device
	[  +0.134442] systemd-fstab-generator[3001]: Ignoring "noauto" option for root device
	[  +0.165932] systemd-fstab-generator[3015]: Ignoring "noauto" option for root device
	[  +5.153712] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 22:08:49 up 6 min,  0 users,  load average: 0.01, 0.07, 0.02
	Linux pause-645000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	May 05 22:08:44 pause-645000 kubelet[1900]: E0505 22:08:44.339976    1900 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-645000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-645000?timeout=10s\": dial tcp 192.169.0.73:8443: connect: connection refused"
	May 05 22:08:44 pause-645000 kubelet[1900]: E0505 22:08:44.340451    1900 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-645000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-645000?timeout=10s\": dial tcp 192.169.0.73:8443: connect: connection refused"
	May 05 22:08:44 pause-645000 kubelet[1900]: E0505 22:08:44.340939    1900 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"pause-645000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-645000?timeout=10s\": dial tcp 192.169.0.73:8443: connect: connection refused"
	May 05 22:08:44 pause-645000 kubelet[1900]: E0505 22:08:44.340972    1900 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 05 22:08:45 pause-645000 kubelet[1900]: E0505 22:08:45.417414    1900 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m7.583254012s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	May 05 22:08:48 pause-645000 kubelet[1900]: E0505 22:08:48.258115    1900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-645000?timeout=10s\": dial tcp 192.169.0.73:8443: connect: connection refused" interval="7s"
	May 05 22:08:48 pause-645000 kubelet[1900]: E0505 22:08:48.258511    1900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.169.0.73:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-645000.17ccb6f922687c52  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-645000,UID:pause-645000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:ContainerGCFailed,Message:rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?,Source:EventSource{Component:kubelet,Host:pause-645000,},FirstTimestamp:2024-05-05 22:05:39.261701202 +0000 UTC m=+60.112424942,LastTimestamp:2024-05-05 22:05:39.261701202 +0000 UTC m=+60.112424942,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-645000,}"
	May 05 22:08:49 pause-645000 kubelet[1900]: I0505 22:08:49.254688    1900 status_manager.go:853] "Failed to get status for pod" podUID="f6b983829ca009f16f7d8467dd0af75c" pod="kube-system/kube-apiserver-pause-645000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-645000\": dial tcp 192.169.0.73:8443: connect: connection refused"
	May 05 22:08:49 pause-645000 kubelet[1900]: I0505 22:08:49.255158    1900 status_manager.go:853] "Failed to get status for pod" podUID="8e9c8372-f1ec-423f-a611-997985e31509" pod="kube-system/coredns-7db6d8ff4d-r4v6r" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-r4v6r\": dial tcp 192.169.0.73:8443: connect: connection refused"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.478520    1900 kubelet.go:2910] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.478574    1900 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.478592    1900 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.478603    1900 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480155    1900 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480209    1900 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480361    1900 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480411    1900 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480454    1900 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480490    1900 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: I0505 22:08:49.480520    1900 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480563    1900 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.480601    1900 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.481053    1900 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.481107    1900 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	May 05 22:08:49 pause-645000 kubelet[1900]: E0505 22:08:49.481244    1900 kubelet.go:1435] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.44/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0505 15:07:49.046440   58880 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0505 15:07:49.058457   58880 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0505 15:07:49.070140   58880 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0505 15:07:49.080393   58880 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0505 15:07:49.091360   58880 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0505 15:07:49.101369   58880 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0505 15:07:49.112482   58880 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-645000 -n pause-645000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-645000 -n pause-645000: exit status 2 (162.607607ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-645000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (194.67s)

                                                
                                    

Test pass (312/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 21.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.39
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 11.67
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.3
18 TestDownloadOnly/v1.30.0/DeleteAll 0.39
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
21 TestBinaryMirror 1.01
22 TestOffline 99.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.2
27 TestAddons/Setup 329.83
29 TestAddons/parallel/Registry 13.96
30 TestAddons/parallel/Ingress 19
31 TestAddons/parallel/InspektorGadget 10.49
32 TestAddons/parallel/MetricsServer 5.53
33 TestAddons/parallel/HelmTiller 9.76
35 TestAddons/parallel/CSI 60.69
36 TestAddons/parallel/Headlamp 11.93
37 TestAddons/parallel/CloudSpanner 6.42
38 TestAddons/parallel/LocalPath 53.48
39 TestAddons/parallel/NvidiaDevicePlugin 5.35
40 TestAddons/parallel/Yakd 5.01
41 TestAddons/parallel/Volcano 39.75
44 TestAddons/serial/GCPAuth/Namespaces 0.1
45 TestAddons/StoppedEnableDisable 5.95
46 TestCertOptions 159.09
47 TestCertExpiration 247.38
48 TestDockerFlags 42.55
49 TestForceSystemdFlag 40.56
50 TestForceSystemdEnv 42.79
53 TestHyperKitDriverInstallOrUpdate 8.23
56 TestErrorSpam/setup 35.03
57 TestErrorSpam/start 1.7
58 TestErrorSpam/status 0.51
59 TestErrorSpam/pause 1.34
60 TestErrorSpam/unpause 1.35
61 TestErrorSpam/stop 153.82
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 208.55
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.83
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.94
73 TestFunctional/serial/CacheCmd/cache/add_local 1.45
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
75 TestFunctional/serial/CacheCmd/cache/list 0.09
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.18
79 TestFunctional/serial/MinikubeKubectlCmd 0.95
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.38
81 TestFunctional/serial/ExtraConfig 41.93
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 2.65
84 TestFunctional/serial/LogsFileCmd 2.78
85 TestFunctional/serial/InvalidService 4.22
87 TestFunctional/parallel/ConfigCmd 0.7
88 TestFunctional/parallel/DashboardCmd 12.99
89 TestFunctional/parallel/DryRun 1.02
90 TestFunctional/parallel/InternationalLanguage 0.55
91 TestFunctional/parallel/StatusCmd 0.58
95 TestFunctional/parallel/ServiceCmdConnect 13.37
96 TestFunctional/parallel/AddonsCmd 0.28
97 TestFunctional/parallel/PersistentVolumeClaim 28.4
99 TestFunctional/parallel/SSHCmd 0.33
100 TestFunctional/parallel/CpCmd 1.14
101 TestFunctional/parallel/MySQL 25.85
102 TestFunctional/parallel/FileSync 0.18
103 TestFunctional/parallel/CertSync 1.2
107 TestFunctional/parallel/NodeLabels 0.05
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.14
111 TestFunctional/parallel/License 0.6
112 TestFunctional/parallel/DockerEnv/bash 0.82
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.16
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
125 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
127 TestFunctional/parallel/ServiceCmd/DeployApp 8.11
128 TestFunctional/parallel/ServiceCmd/List 0.78
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.78
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
131 TestFunctional/parallel/ServiceCmd/Format 0.44
132 TestFunctional/parallel/ServiceCmd/URL 0.45
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
134 TestFunctional/parallel/ProfileCmd/profile_list 0.3
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
136 TestFunctional/parallel/MountCmd/any-port 6.21
137 TestFunctional/parallel/MountCmd/specific-port 1.61
138 TestFunctional/parallel/MountCmd/VerifyCleanup 2.53
139 TestFunctional/parallel/Version/short 0.15
140 TestFunctional/parallel/Version/components 0.4
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.17
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
145 TestFunctional/parallel/ImageCommands/ImageBuild 1.92
146 TestFunctional/parallel/ImageCommands/Setup 1.95
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.52
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.06
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.82
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.78
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.95
154 TestFunctional/delete_addon-resizer_images 0.12
155 TestFunctional/delete_my-image_image 0.05
156 TestFunctional/delete_minikube_cached_images 0.05
160 TestMultiControlPlane/serial/StartCluster 200.33
161 TestMultiControlPlane/serial/DeployApp 6.96
162 TestMultiControlPlane/serial/PingHostFromPods 1.38
163 TestMultiControlPlane/serial/AddWorkerNode 64.66
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.39
166 TestMultiControlPlane/serial/CopyFile 9.68
167 TestMultiControlPlane/serial/StopSecondaryNode 8.72
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.3
169 TestMultiControlPlane/serial/RestartSecondaryNode 39.94
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.3
174 TestMultiControlPlane/serial/StopCluster 91.83
175 TestMultiControlPlane/serial/RestartCluster 107.07
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.29
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.35
181 TestImageBuild/serial/Setup 39.96
182 TestImageBuild/serial/NormalBuild 1.37
183 TestImageBuild/serial/BuildWithBuildArg 0.51
184 TestImageBuild/serial/BuildWithDockerIgnore 0.25
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
189 TestJSONOutput/start/Command 94.24
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.49
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.47
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 8.35
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.77
217 TestMainNoArgs 0.09
218 TestMinikubeProfile 89.84
221 TestMountStart/serial/StartWithMountFirst 21.5
222 TestMountStart/serial/VerifyMountFirst 0.33
223 TestMountStart/serial/StartWithMountSecond 19.12
224 TestMountStart/serial/VerifyMountSecond 0.32
225 TestMountStart/serial/DeleteFirst 2.39
226 TestMountStart/serial/VerifyMountPostDelete 0.32
227 TestMountStart/serial/Stop 2.4
228 TestMountStart/serial/RestartStopped 42.44
229 TestMountStart/serial/VerifyMountPostStop 0.32
232 TestMultiNode/serial/FreshStart2Nodes 215.66
233 TestMultiNode/serial/DeployApp2Nodes 5.75
234 TestMultiNode/serial/PingHostFrom2Pods 0.93
235 TestMultiNode/serial/AddNode 37.86
236 TestMultiNode/serial/MultiNodeLabels 0.05
237 TestMultiNode/serial/ProfileList 0.22
238 TestMultiNode/serial/CopyFile 5.64
239 TestMultiNode/serial/StopNode 2.87
240 TestMultiNode/serial/StartAfterStop 26.68
241 TestMultiNode/serial/RestartKeepsNodes 168.23
242 TestMultiNode/serial/DeleteNode 3.41
243 TestMultiNode/serial/StopMultiNode 16.8
244 TestMultiNode/serial/RestartMultiNode 72.31
245 TestMultiNode/serial/ValidateNameConflict 45.66
249 TestPreload 138.7
251 TestScheduledStopUnix 108.45
252 TestSkaffold 229.5
255 TestRunningBinaryUpgrade 80.97
257 TestKubernetesUpgrade 134.17
270 TestStoppedBinaryUpgrade/Setup 1.21
271 TestStoppedBinaryUpgrade/Upgrade 99.74
272 TestStoppedBinaryUpgrade/MinikubeLogs 2.59
281 TestPause/serial/Start 210.83
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.48
284 TestNoKubernetes/serial/StartWithK8s 42.82
285 TestNoKubernetes/serial/StartWithStopK8s 17.39
286 TestNoKubernetes/serial/Start 21.03
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.14
288 TestNoKubernetes/serial/ProfileList 0.54
289 TestNoKubernetes/serial/Stop 8.39
290 TestNoKubernetes/serial/StartNoArgs 19.33
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.14
292 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 4.38
293 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.15
295 TestNetworkPlugins/group/auto/Start 93.25
296 TestNetworkPlugins/group/auto/KubeletFlags 0.16
297 TestNetworkPlugins/group/auto/NetCatPod 11.15
298 TestNetworkPlugins/group/auto/DNS 0.14
299 TestNetworkPlugins/group/auto/Localhost 0.1
300 TestNetworkPlugins/group/auto/HairPin 0.1
301 TestNetworkPlugins/group/kindnet/Start 63.93
302 TestNetworkPlugins/group/calico/Start 77.37
303 TestNetworkPlugins/group/kindnet/ControllerPod 6
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
305 TestNetworkPlugins/group/kindnet/NetCatPod 12.14
306 TestNetworkPlugins/group/kindnet/DNS 0.14
307 TestNetworkPlugins/group/kindnet/Localhost 0.12
308 TestNetworkPlugins/group/kindnet/HairPin 0.12
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.16
311 TestNetworkPlugins/group/calico/NetCatPod 12.14
312 TestNetworkPlugins/group/custom-flannel/Start 63.34
313 TestNetworkPlugins/group/calico/DNS 0.12
314 TestNetworkPlugins/group/calico/Localhost 0.13
315 TestNetworkPlugins/group/calico/HairPin 0.11
316 TestNetworkPlugins/group/false/Start 54.32
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.15
319 TestNetworkPlugins/group/custom-flannel/DNS 0.13
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
322 TestNetworkPlugins/group/false/KubeletFlags 0.16
323 TestNetworkPlugins/group/false/NetCatPod 12.15
324 TestNetworkPlugins/group/enable-default-cni/Start 172.94
325 TestNetworkPlugins/group/false/DNS 0.12
326 TestNetworkPlugins/group/false/Localhost 0.1
327 TestNetworkPlugins/group/false/HairPin 0.1
328 TestNetworkPlugins/group/flannel/Start 62.44
329 TestNetworkPlugins/group/flannel/ControllerPod 6
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
331 TestNetworkPlugins/group/flannel/NetCatPod 11.14
332 TestNetworkPlugins/group/flannel/DNS 0.13
333 TestNetworkPlugins/group/flannel/Localhost 0.1
334 TestNetworkPlugins/group/flannel/HairPin 0.1
335 TestNetworkPlugins/group/bridge/Start 172.07
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.14
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
341 TestNetworkPlugins/group/kubenet/Start 60.81
342 TestNetworkPlugins/group/kubenet/KubeletFlags 0.17
343 TestNetworkPlugins/group/kubenet/NetCatPod 11.15
344 TestNetworkPlugins/group/kubenet/DNS 0.13
345 TestNetworkPlugins/group/kubenet/Localhost 0.1
346 TestNetworkPlugins/group/kubenet/HairPin 0.1
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
348 TestNetworkPlugins/group/bridge/NetCatPod 10.14
350 TestStartStop/group/old-k8s-version/serial/FirstStart 120.38
351 TestNetworkPlugins/group/bridge/DNS 0.13
352 TestNetworkPlugins/group/bridge/Localhost 0.11
353 TestNetworkPlugins/group/bridge/HairPin 0.1
355 TestStartStop/group/no-preload/serial/FirstStart 58.75
356 TestStartStop/group/no-preload/serial/DeployApp 8.21
357 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
358 TestStartStop/group/no-preload/serial/Stop 8.47
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
360 TestStartStop/group/no-preload/serial/SecondStart 299.02
361 TestStartStop/group/old-k8s-version/serial/DeployApp 7.33
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
363 TestStartStop/group/old-k8s-version/serial/Stop 8.42
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
365 TestStartStop/group/old-k8s-version/serial/SecondStart 390.35
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
369 TestStartStop/group/no-preload/serial/Pause 2.02
371 TestStartStop/group/embed-certs/serial/FirstStart 55.22
372 TestStartStop/group/embed-certs/serial/DeployApp 8.21
373 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.73
374 TestStartStop/group/embed-certs/serial/Stop 8.53
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
376 TestStartStop/group/embed-certs/serial/SecondStart 294.35
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.18
380 TestStartStop/group/old-k8s-version/serial/Pause 2.17
382 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.94
383 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.2
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.77
385 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.43
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.35
387 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.67
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.17
391 TestStartStop/group/embed-certs/serial/Pause 1.98
393 TestStartStop/group/newest-cni/serial/FirstStart 52.88
394 TestStartStop/group/newest-cni/serial/DeployApp 0
395 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
396 TestStartStop/group/newest-cni/serial/Stop 8.47
397 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
398 TestStartStop/group/newest-cni/serial/SecondStart 29.55
399 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
401 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.18
402 TestStartStop/group/newest-cni/serial/Pause 2.12
403 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
404 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
405 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
406 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.97
TestDownloadOnly/v1.20.0/json-events (21.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-899000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-899000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (21.19907375s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (21.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-899000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-899000: exit status 85 (299.227832ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-899000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |          |
	|         | -p download-only-899000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 13:56:24
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 13:56:24.650465   54212 out.go:291] Setting OutFile to fd 1 ...
	I0505 13:56:24.650681   54212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:24.650687   54212 out.go:304] Setting ErrFile to fd 2...
	I0505 13:56:24.650690   54212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:24.650865   54212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	W0505 13:56:24.650969   54212 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18602-53665/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18602-53665/.minikube/config/config.json: no such file or directory
	I0505 13:56:24.652858   54212 out.go:298] Setting JSON to true
	I0505 13:56:24.675441   54212 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":17755,"bootTime":1714924829,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 13:56:24.675568   54212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 13:56:24.698215   54212 out.go:97] [download-only-899000] minikube v1.33.0 on Darwin 14.4.1
	I0505 13:56:24.718749   54212 out.go:169] MINIKUBE_LOCATION=18602
	I0505 13:56:24.698374   54212 notify.go:220] Checking for updates...
	W0505 13:56:24.698378   54212 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball: no such file or directory
	I0505 13:56:24.764036   54212 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 13:56:24.784850   54212 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 13:56:24.805935   54212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 13:56:24.827670   54212 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	W0505 13:56:24.870235   54212 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 13:56:24.870705   54212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 13:56:24.902136   54212 out.go:97] Using the hyperkit driver based on user configuration
	I0505 13:56:24.902190   54212 start.go:297] selected driver: hyperkit
	I0505 13:56:24.902209   54212 start.go:901] validating driver "hyperkit" against <nil>
	I0505 13:56:24.902445   54212 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:24.902674   54212 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 13:56:25.130842   54212 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 13:56:25.135380   54212 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 13:56:25.135416   54212 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 13:56:25.135468   54212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 13:56:25.138553   54212 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0505 13:56:25.138698   54212 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 13:56:25.138737   54212 cni.go:84] Creating CNI manager for ""
	I0505 13:56:25.138752   54212 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0505 13:56:25.138818   54212 start.go:340] cluster config:
	{Name:download-only-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:56:25.139038   54212 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:25.160107   54212 out.go:97] Downloading VM boot image ...
	I0505 13:56:25.160234   54212 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0505 13:56:33.760616   54212 out.go:97] Starting "download-only-899000" primary control-plane node in "download-only-899000" cluster
	I0505 13:56:33.760655   54212 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:33.814080   54212 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0505 13:56:33.814116   54212 cache.go:56] Caching tarball of preloaded images
	I0505 13:56:33.815421   54212 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:33.835328   54212 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0505 13:56:33.835357   54212 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0505 13:56:33.914477   54212 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0505 13:56:40.640994   54212 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0505 13:56:40.641187   54212 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0505 13:56:41.187282   54212 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0505 13:56:41.187538   54212 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/download-only-899000/config.json ...
	I0505 13:56:41.187562   54212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/download-only-899000/config.json: {Name:mk6fee299c55f4fcf14533032474406fcb3c83cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:56:41.188841   54212 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0505 13:56:41.189256   54212 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-899000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-899000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
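The trace above is minikube's download-only flow: the preload tarball is fetched with an md5 checksum embedded in the URL, then the file on disk is verified before it is cached. Below is a minimal Go sketch of that download-and-verify step (it is not minikube's actual preload code); the URL and checksum are copied from the log above, while the destination filename is illustrative.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadAndVerify fetches url into dest and compares the file's MD5 sum
	// against wantMD5 (hex-encoded), mirroring the preload download step logged above.
	func downloadAndVerify(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		// Stream the response into the file and the hash in one pass.
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// URL and checksum taken from the v1.20.0 preload download in the log above.
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
		if err := downloadAndVerify(url, "preloaded-images.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}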

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-899000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (11.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-911000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-911000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit : (11.671055882s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (11.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-911000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-911000: exit status 85 (299.953603ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-899000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | -p download-only-899000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| delete  | -p download-only-899000        | download-only-899000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT | 05 May 24 13:56 PDT |
	| start   | -o=json --download-only        | download-only-911000 | jenkins | v1.33.0 | 05 May 24 13:56 PDT |                     |
	|         | -p download-only-911000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/05 13:56:46
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0505 13:56:46.914876   54248 out.go:291] Setting OutFile to fd 1 ...
	I0505 13:56:46.915063   54248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:46.915069   54248 out.go:304] Setting ErrFile to fd 2...
	I0505 13:56:46.915073   54248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 13:56:46.915257   54248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 13:56:46.916688   54248 out.go:298] Setting JSON to true
	I0505 13:56:46.939033   54248 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":17777,"bootTime":1714924829,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 13:56:46.939132   54248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 13:56:46.960744   54248 out.go:97] [download-only-911000] minikube v1.33.0 on Darwin 14.4.1
	I0505 13:56:46.981606   54248 out.go:169] MINIKUBE_LOCATION=18602
	I0505 13:56:46.960861   54248 notify.go:220] Checking for updates...
	I0505 13:56:47.023652   54248 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 13:56:47.044524   54248 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 13:56:47.065809   54248 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 13:56:47.086791   54248 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	W0505 13:56:47.128681   54248 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0505 13:56:47.129208   54248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 13:56:47.159545   54248 out.go:97] Using the hyperkit driver based on user configuration
	I0505 13:56:47.159620   54248 start.go:297] selected driver: hyperkit
	I0505 13:56:47.159636   54248 start.go:901] validating driver "hyperkit" against <nil>
	I0505 13:56:47.159844   54248 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:47.160075   54248 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18602-53665/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0505 13:56:47.170269   54248 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0505 13:56:47.174510   54248 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 13:56:47.174565   54248 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0505 13:56:47.174595   54248 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0505 13:56:47.177510   54248 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0505 13:56:47.177639   54248 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0505 13:56:47.177699   54248 cni.go:84] Creating CNI manager for ""
	I0505 13:56:47.177714   54248 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0505 13:56:47.177751   54248 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0505 13:56:47.177862   54248 start.go:340] cluster config:
	{Name:download-only-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 13:56:47.177948   54248 iso.go:125] acquiring lock: {Name:mk0da19ac8d2d553b5039d86a6857a5ca35625a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0505 13:56:47.198482   54248 out.go:97] Starting "download-only-911000" primary control-plane node in "download-only-911000" cluster
	I0505 13:56:47.198537   54248 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:47.256612   54248 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 13:56:47.256674   54248 cache.go:56] Caching tarball of preloaded images
	I0505 13:56:47.257062   54248 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:47.278670   54248 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0505 13:56:47.278725   54248 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0505 13:56:47.357556   54248 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0505 13:56:54.471534   54248 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0505 13:56:54.471829   54248 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0505 13:56:54.962004   54248 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0505 13:56:54.962237   54248 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/download-only-911000/config.json ...
	I0505 13:56:54.962261   54248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/download-only-911000/config.json: {Name:mk98cf258abf8803ed01c00fe2ad3ccb641e2156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0505 13:56:54.962642   54248 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0505 13:56:54.963645   54248 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18602-53665/.minikube/cache/darwin/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-911000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-911000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-911000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
x
+
TestBinaryMirror (1.01s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-169000 --alsologtostderr --binary-mirror http://127.0.0.1:55645 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-169000
--- PASS: TestBinaryMirror (1.01s)
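TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:55645) instead of the default release host when fetching the kubectl/kubeadm/kubelet binaries. Below is a minimal sketch of such a mirror, assuming it only needs to expose the cached release files over plain HTTP; the ./k8s-mirror directory name and its layout are assumptions, not taken from the test.

	package main

	import (
		"log"
		"net/http"
	)

	// Serve a local directory as a stand-in binary mirror; a client can then be
	// started with --binary-mirror http://127.0.0.1:55645.
	func main() {
		http.Handle("/", http.FileServer(http.Dir("./k8s-mirror")))
		log.Println("serving ./k8s-mirror on http://127.0.0.1:55645")
		log.Fatal(http.ListenAndServe("127.0.0.1:55645", nil))
	}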

                                                
                                    
x
+
TestOffline (99.86s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-649000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-649000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (1m34.585160308s)
helpers_test.go:175: Cleaning up "offline-docker-649000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-649000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-649000: (5.278863991s)
--- PASS: TestOffline (99.86s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-099000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-099000: exit status 85 (217.821672ms)

                                                
                                                
-- stdout --
	* Profile "addons-099000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-099000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.2s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-099000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-099000: exit status 85 (197.473946ms)

                                                
                                                
-- stdout --
	* Profile "addons-099000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-099000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.20s)

                                                
                                    
x
+
TestAddons/Setup (329.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-099000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-099000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m29.831367133s)
--- PASS: TestAddons/Setup (329.83s)

                                                
                                    
x
+
TestAddons/parallel/Registry (13.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 10.117237ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-97rs7" [d81962e9-0cb7-4acc-bc42-fbcf1c454dd3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005109791s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-b22p8" [d58cacac-f33d-45ab-82d5-975d2482e444] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002910261s
addons_test.go:342: (dbg) Run:  kubectl --context addons-099000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-099000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-099000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.313545671s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 ip
2024/05/05 14:02:44 [DEBUG] GET http://192.169.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.96s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (19s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-099000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-099000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-099000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3a2d95fd-eafd-440d-bc2d-8779a19aca61] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3a2d95fd-eafd-440d-bc2d-8779a19aca61] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004057554s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-099000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.48
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-amd64 -p addons-099000 addons disable ingress-dns --alsologtostderr -v=1: (1.592111061s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-099000 addons disable ingress --alsologtostderr -v=1: (7.497932088s)
--- PASS: TestAddons/parallel/Ingress (19.00s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.49s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xh82z" [91eed0b0-a2f8-4ae3-aadb-1e6f0b0cbb8d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004634912s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-099000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-099000: (5.48334552s)
--- PASS: TestAddons/parallel/InspektorGadget (10.49s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.653513ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-64q6b" [2dee9f19-5039-42e4-ac50-46d77ec61f7e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004397556s
addons_test.go:417: (dbg) Run:  kubectl --context addons-099000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.53s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.76s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.521309ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-k8w6g" [00fd28a9-8ff7-4d5f-9fcb-84ba7f593e08] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003335618s
addons_test.go:475: (dbg) Run:  kubectl --context addons-099000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-099000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.341228392s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.76s)

                                                
                                    
x
+
TestAddons/parallel/CSI (60.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.705635ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-099000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-099000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4aa475ab-6152-40cb-a83c-47a5201b0a95] Pending
helpers_test.go:344: "task-pv-pod" [4aa475ab-6152-40cb-a83c-47a5201b0a95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4aa475ab-6152-40cb-a83c-47a5201b0a95] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004503644s
addons_test.go:586: (dbg) Run:  kubectl --context addons-099000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-099000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-099000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-099000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-099000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-099000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-099000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d16bd675-7fb1-4309-8c26-91d070323653] Pending
helpers_test.go:344: "task-pv-pod-restore" [d16bd675-7fb1-4309-8c26-91d070323653] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d16bd675-7fb1-4309-8c26-91d070323653] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004810975s
addons_test.go:628: (dbg) Run:  kubectl --context addons-099000 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-099000 delete pod task-pv-pod-restore: (1.050933772s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-099000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-099000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-099000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.383064091s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.69s)
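The repeated helpers_test.go:394 lines above are the harness polling the claim's .status.phase with kubectl until it reports Bound. Below is a minimal sketch of that wait loop, shelling out to kubectl the same way the tests do; the context, namespace and PVC name are simply the ones from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
	// until it returns "Bound" or the timeout expires.
	func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-099000", "default", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("pvc is Bound")
	}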

                                                
                                    
x
+
TestAddons/parallel/Headlamp (11.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-099000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-wdm5h" [32a79225-b6da-4c20-8133-ad9e2ec34767] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-wdm5h" [32a79225-b6da-4c20-8133-ad9e2ec34767] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004675166s
--- PASS: TestAddons/parallel/Headlamp (11.93s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.42s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-7cqtp" [e59e9d43-f46c-45bf-bea2-e253cdc89e62] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002920847s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-099000
--- PASS: TestAddons/parallel/CloudSpanner (6.42s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.48s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-099000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-099000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9c82e13e-2af5-4492-ae55-2763fe39d85a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9c82e13e-2af5-4492-ae55-2763fe39d85a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9c82e13e-2af5-4492-ae55-2763fe39d85a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002519832s
addons_test.go:992: (dbg) Run:  kubectl --context addons-099000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 ssh "cat /opt/local-path-provisioner/pvc-cd637d17-1ef5-45e6-aede-09374c0572f1_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-099000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-099000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-099000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.818071517s)
--- PASS: TestAddons/parallel/LocalPath (53.48s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.35s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kxhvx" [52db28ab-9064-478a-9c83-8ee34be6d417] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004019258s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-099000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.35s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-2gx57" [8dcc6768-874d-4aae-84ee-1ae107211621] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.006011072s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (39.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 2.107524ms
addons_test.go:889: volcano-scheduler stabilized in 2.140077ms
addons_test.go:905: volcano-controller stabilized in 2.457365ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-6xz7l" [e72e9437-f737-4415-9cd5-59b01aceaa8b] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.004800298s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-mkvfj" [9809d2cb-54e4-40b0-b46d-a265c0aba6e8] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004639972s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-2cqvl" [0dcc604d-12a7-4dd6-9935-e9e6aa33a6fc] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003929186s
addons_test.go:924: (dbg) Run:  kubectl --context addons-099000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-099000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-099000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d6c75f2e-147f-4cbd-bd03-0d80fe3f6895] Pending
helpers_test.go:344: "test-job-nginx-0" [d6c75f2e-147f-4cbd-bd03-0d80fe3f6895] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d6c75f2e-147f-4cbd-bd03-0d80fe3f6895] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 15.003812065s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-099000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-099000 addons disable volcano --alsologtostderr -v=1: (9.506794644s)
--- PASS: TestAddons/parallel/Volcano (39.75s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-099000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-099000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (5.95s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-099000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-099000: (5.386830103s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-099000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-099000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-099000
--- PASS: TestAddons/StoppedEnableDisable (5.95s)

                                                
                                    
x
+
TestCertOptions (159.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-987000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-987000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (2m33.433271461s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-987000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-987000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-987000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-987000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-987000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-987000: (5.286072297s)
--- PASS: TestCertOptions (159.09s)
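TestCertOptions adds extra --apiserver-ips/--apiserver-names and then inspects the generated API server certificate with openssl. The same check can be done programmatically: parse the PEM certificate and confirm the expected IP and DNS SANs are present. Below is a minimal sketch, assuming apiserver.crt has already been copied to the local working directory (how it is copied out of the VM is left out); the expected SANs are the ones passed on the command line above.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// hasSANs reports whether every expected DNS name and IP address appears
	// among the certificate's subject alternative names.
	func hasSANs(pemBytes []byte, wantDNS, wantIPs []string) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		present := map[string]bool{}
		for _, d := range cert.DNSNames {
			present[d] = true
		}
		for _, ip := range cert.IPAddresses {
			present[ip.String()] = true
		}
		for _, want := range append(append([]string{}, wantDNS...), wantIPs...) {
			if !present[want] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		data, err := os.ReadFile("apiserver.crt") // local copy; path is illustrative
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		ok, err := hasSANs(data,
			[]string{"localhost", "www.google.com"},
			[]string{"127.0.0.1", "192.168.15.15"})
		fmt.Println(ok, err)
	}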

                                                
                                    
x
+
TestCertExpiration (247.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-724000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0505 15:06:03.285218   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-724000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (36.617659905s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-724000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-724000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (25.475934075s)
helpers_test.go:175: Cleaning up "cert-expiration-724000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-724000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-724000: (5.286048579s)
--- PASS: TestCertExpiration (247.38s)

                                                
                                    
x
+
TestDockerFlags (42.55s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-690000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-690000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (38.775413901s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-690000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-690000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-690000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-690000: (3.435088179s)
--- PASS: TestDockerFlags (42.55s)
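TestDockerFlags starts a profile with --docker-env/--docker-opt and then confirms they reached the Docker systemd unit by reading `systemctl show docker --property=Environment` (and --property=ExecStart) over `minikube ssh`. Below is a minimal sketch of that assertion, shelling out the same way the test does; the binary path, profile name and expected values are simply the ones from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dockerUnitProperty reads one systemd property of the docker unit inside
	// the VM for the given minikube profile.
	func dockerUnitProperty(profile, prop string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh",
			fmt.Sprintf("sudo systemctl show docker --property=%s --no-pager", prop)).CombinedOutput()
		return string(out), err
	}

	func main() {
		env, err := dockerUnitProperty("docker-flags-690000", "Environment")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
		}
	}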

                                                
                                    
x
+
TestForceSystemdFlag (40.56s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-033000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E0505 15:04:41.364508   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-033000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (36.95714537s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-033000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-033000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-033000: (3.430423746s)
--- PASS: TestForceSystemdFlag (40.56s)

TestForceSystemdEnv (42.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-033000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-033000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (37.323826712s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-033000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-033000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-033000: (5.281099355s)
--- PASS: TestForceSystemdEnv (42.79s)

TestHyperKitDriverInstallOrUpdate (8.23s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.23s)

TestErrorSpam/setup (35.03s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-594000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-594000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 --driver=hyperkit : (35.028994193s)
--- PASS: TestErrorSpam/setup (35.03s)

TestErrorSpam/start (1.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 start --dry-run
--- PASS: TestErrorSpam/start (1.70s)

TestErrorSpam/status (0.51s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 status
--- PASS: TestErrorSpam/status (0.51s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

TestErrorSpam/stop (153.82s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 stop: (3.393384397s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 stop: (1m15.211179228s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 stop
E0505 14:07:31.473530   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:31.482699   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:31.493656   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:31.515513   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:31.557757   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:31.639299   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:31.801486   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:32.121947   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:32.764229   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:34.045255   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:36.607528   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:41.728489   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:07:51.970714   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:08:12.452919   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-594000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-594000 stop: (1m15.213113693s)
--- PASS: TestErrorSpam/stop (153.82s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18602-53665/.minikube/files/etc/test/nested/copy/54210/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (208.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-341000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0505 14:08:53.413565   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:10:15.335499   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-341000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (3m28.55177749s)
--- PASS: TestFunctional/serial/StartWithProxy (208.55s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-341000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-341000 --alsologtostderr -v=8: (39.82564146s)
functional_test.go:659: soft start took 39.826041851s for "functional-341000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.83s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-341000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 cache add registry.k8s.io/pause:3.1: (1.011605968s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 cache add registry.k8s.io/pause:3.3: (1.026138118s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.94s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4091225384/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cache add minikube-local-cache-test:functional-341000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cache delete minikube-local-cache-test:functional-341000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-341000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (153.831234ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)
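Note: the cache_reload sequence above removes the cached image from the node and restores it from minikube's on-host cache. A minimal sketch of the same flow, reusing the functional-341000 profile from this run:
	out/minikube-darwin-amd64 -p functional-341000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-341000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image no longer present on the node
	out/minikube-darwin-amd64 -p functional-341000 cache reload
	out/minikube-darwin-amd64 -p functional-341000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the cached image is pushed back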

TestFunctional/serial/CacheCmd/cache/delete (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.95s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 kubectl -- --context functional-341000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.95s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.38s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-341000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-341000 get pods: (1.37455982s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.38s)

TestFunctional/serial/ExtraConfig (41.93s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-341000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0505 14:12:31.470588   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:12:59.176985   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-341000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.933133073s)
functional_test.go:757: restart took 41.933275137s for "functional-341000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.93s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-341000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 logs: (2.648315726s)
--- PASS: TestFunctional/serial/LogsCmd (2.65s)

TestFunctional/serial/LogsFileCmd (2.78s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3776022103/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3776022103/001/logs.txt: (2.775054281s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.78s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-341000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-341000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-341000: exit status 115 (278.754542ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.50:31678 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-341000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)
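Note: InvalidService relies on minikube service exiting with SVC_UNREACHABLE (status 115) when the target service has no running pods behind it. A minimal sketch of the same flow against the profile used here:
	kubectl --context functional-341000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-341000   # exits 115: no running pod for service invalid-svc
	kubectl --context functional-341000 delete -f testdata/invalidsvc.yaml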

TestFunctional/parallel/ConfigCmd (0.7s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 config get cpus: exit status 14 (71.53075ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 config get cpus: exit status 14 (65.739263ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.70s)

TestFunctional/parallel/DashboardCmd (12.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-341000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-341000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 55471: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.99s)

TestFunctional/parallel/DryRun (1.02s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-341000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-341000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (494.135994ms)
-- stdout --
	* [functional-341000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0505 14:14:18.903228   55432 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:14:18.903389   55432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:14:18.903395   55432 out.go:304] Setting ErrFile to fd 2...
	I0505 14:14:18.903398   55432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:14:18.903575   55432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:14:18.904963   55432 out.go:298] Setting JSON to false
	I0505 14:14:18.927372   55432 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":18829,"bootTime":1714924829,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 14:14:18.927468   55432 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:14:18.949238   55432 out.go:177] * [functional-341000] minikube v1.33.0 on Darwin 14.4.1
	I0505 14:14:18.990919   55432 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:14:19.011803   55432 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:14:18.991056   55432 notify.go:220] Checking for updates...
	I0505 14:14:19.032915   55432 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 14:14:19.053918   55432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:14:19.074746   55432 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:14:19.095876   55432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:14:19.117495   55432 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:14:19.118222   55432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:14:19.118290   55432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:14:19.127651   55432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56709
	I0505 14:14:19.128006   55432 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:14:19.128419   55432 main.go:141] libmachine: Using API Version  1
	I0505 14:14:19.128427   55432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:14:19.128627   55432 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:14:19.128738   55432 main.go:141] libmachine: (functional-341000) Calling .DriverName
	I0505 14:14:19.128952   55432 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:14:19.129202   55432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:14:19.129221   55432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:14:19.137731   55432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56711
	I0505 14:14:19.138074   55432 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:14:19.138399   55432 main.go:141] libmachine: Using API Version  1
	I0505 14:14:19.138419   55432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:14:19.138596   55432 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:14:19.138706   55432 main.go:141] libmachine: (functional-341000) Calling .DriverName
	I0505 14:14:19.166788   55432 out.go:177] * Using the hyperkit driver based on existing profile
	I0505 14:14:19.208788   55432 start.go:297] selected driver: hyperkit
	I0505 14:14:19.208809   55432 start.go:901] validating driver "hyperkit" against &{Name:functional-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:functional-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.50 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:14:19.208995   55432 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:14:19.233794   55432 out.go:177] 
	W0505 14:14:19.254864   55432 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0505 14:14:19.275909   55432 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-341000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.02s)
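Note: DryRun shows that memory validation happens before any VM work; a --dry-run start requesting 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because it falls below the 1800MB usable minimum. A minimal sketch against the same profile:
	out/minikube-darwin-amd64 start -p functional-341000 --dry-run --memory 250MB --driver=hyperkit   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-darwin-amd64 start -p functional-341000 --dry-run --driver=hyperkit                  # succeeds with the profile's saved settings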

TestFunctional/parallel/InternationalLanguage (0.55s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-341000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-341000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (548.40595ms)
-- stdout --
	* [functional-341000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0505 14:14:18.348814   55425 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:14:18.349019   55425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:14:18.349025   55425 out.go:304] Setting ErrFile to fd 2...
	I0505 14:14:18.349029   55425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:14:18.349240   55425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:14:18.350879   55425 out.go:298] Setting JSON to false
	I0505 14:14:18.373669   55425 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":18829,"bootTime":1714924829,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0505 14:14:18.373750   55425 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0505 14:14:18.397567   55425 out.go:177] * [functional-341000] minikube v1.33.0 sur Darwin 14.4.1
	I0505 14:14:18.439319   55425 out.go:177]   - MINIKUBE_LOCATION=18602
	I0505 14:14:18.439325   55425 notify.go:220] Checking for updates...
	I0505 14:14:18.481204   55425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	I0505 14:14:18.523240   55425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0505 14:14:18.544259   55425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0505 14:14:18.565282   55425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	I0505 14:14:18.586318   55425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0505 14:14:18.607545   55425 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:14:18.607904   55425 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:14:18.607951   55425 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:14:18.617142   55425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56704
	I0505 14:14:18.617489   55425 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:14:18.617901   55425 main.go:141] libmachine: Using API Version  1
	I0505 14:14:18.617917   55425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:14:18.618134   55425 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:14:18.618240   55425 main.go:141] libmachine: (functional-341000) Calling .DriverName
	I0505 14:14:18.618429   55425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0505 14:14:18.618688   55425 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:14:18.618710   55425 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:14:18.627704   55425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56706
	I0505 14:14:18.628089   55425 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:14:18.628453   55425 main.go:141] libmachine: Using API Version  1
	I0505 14:14:18.628482   55425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:14:18.628714   55425 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:14:18.628839   55425 main.go:141] libmachine: (functional-341000) Calling .DriverName
	I0505 14:14:18.657447   55425 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0505 14:14:18.731192   55425 start.go:297] selected driver: hyperkit
	I0505 14:14:18.731209   55425 start.go:901] validating driver "hyperkit" against &{Name:functional-341000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:functional-341000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.50 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0505 14:14:18.731320   55425 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0505 14:14:18.755352   55425 out.go:177] 
	W0505 14:14:18.776315   55425 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0505 14:14:18.797264   55425 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.55s)

TestFunctional/parallel/StatusCmd (0.58s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.58s)

TestFunctional/parallel/ServiceCmdConnect (13.37s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-341000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-341000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2p9f5" [d5da3b5d-6e0a-49cf-b287-4deb51118de2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2p9f5" [d5da3b5d-6e0a-49cf-b287-4deb51118de2] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.004005513s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.50:31722
functional_test.go:1671: http://192.169.0.50:31722: success! body:
Hostname: hello-node-connect-57b4589c47-2p9f5
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.50:8080/
Request Headers:
	accept-encoding=gzip
	host=192.169.0.50:31722
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.37s)

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (28.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7860c43a-4a4f-4905-95f0-b9229d6a7600] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003535431s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-341000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-341000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-341000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-341000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3473b86e-9b9c-40a5-9300-8e89a007464d] Pending
helpers_test.go:344: "sp-pod" [3473b86e-9b9c-40a5-9300-8e89a007464d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3473b86e-9b9c-40a5-9300-8e89a007464d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003741292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-341000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-341000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-341000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ac018f58-ec1a-4258-aceb-131d7fc2e6a5] Pending
helpers_test.go:344: "sp-pod" [ac018f58-ec1a-4258-aceb-131d7fc2e6a5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ac018f58-ec1a-4258-aceb-131d7fc2e6a5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004564483s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-341000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.40s)

TestFunctional/parallel/SSHCmd (0.33s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.33s)

TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh -n functional-341000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cp functional-341000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd341306013/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh -n functional-341000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh -n functional-341000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)
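
The CpCmd steps copy a local file into the guest and read it back over SSH. A hedged Go sketch of that round-trip using the cp and ssh subcommands exercised above (binary path and profile name are taken from this run; the rest is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		mk := "out/minikube-darwin-amd64"
		profile := "functional-341000"

		// Copy a host file into the guest VM.
		if out, err := exec.Command(mk, "-p", profile, "cp",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
		}

		// Read the file back over SSH to confirm the copy landed.
		out, err := exec.Command(mk, "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("ssh cat failed: %v\n%s", err, out))
		}
		fmt.Print(string(out))
	}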

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-341000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-fc8q2" [c277405c-fabb-4130-8177-d17193e9b920] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-fc8q2" [c277405c-fabb-4130-8177-d17193e9b920] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003758171s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;": exit status 1 (142.672681ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;": exit status 1 (125.786236ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;": exit status 1 (99.833687ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-341000 exec mysql-64454c8b5c-fc8q2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.85s)
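
This is the expected startup pattern for a freshly created mysql pod: the first exec fails with ERROR 1045 while credentials are still being initialized, the next attempts fail with ERROR 2002 because mysqld is not yet listening on its socket, and a later retry succeeds. A hedged Go sketch of that retry loop around the same kubectl exec command (the two-minute cap and five-second interval are illustrative, not the test's own backoff):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative cap
		for {
			out, err := exec.Command("kubectl", "--context", "functional-341000",
				"exec", "mysql-64454c8b5c-fc8q2", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			// ERROR 1045 / ERROR 2002 are expected while mysqld is still starting up.
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("mysql never became ready: %v\n%s", err, out))
			}
			time.Sleep(5 * time.Second)
		}
	}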

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/54210/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /etc/test/nested/copy/54210/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/54210.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /etc/ssl/certs/54210.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/54210.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /usr/share/ca-certificates/54210.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/542102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /etc/ssl/certs/542102.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/542102.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /usr/share/ca-certificates/542102.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-341000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh "sudo systemctl is-active crio": exit status 1 (139.179116ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.14s)
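
The non-zero exit above is the expected result: systemctl is-active prints the unit state and exits non-zero for inactive units (status 3, which ssh propagates), so the check passes as long as stdout reports crio as inactive on this Docker-runtime cluster. A hedged Go sketch of that interpretation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-341000",
			"ssh", "sudo systemctl is-active crio").CombinedOutput()
		state := strings.TrimSpace(string(out))

		// A non-zero exit is expected for an inactive unit; what matters is the
		// printed state. "inactive" means crio is disabled, which is exactly what
		// a Docker-runtime cluster should report.
		if err != nil && strings.Contains(state, "inactive") {
			fmt.Println("crio is inactive, as expected")
			return
		}
		if err != nil {
			panic(fmt.Sprintf("unexpected failure: %v\n%s", err, out))
		}
		fmt.Println("crio unexpectedly active:", state)
	}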

                                                
                                    
x
+
TestFunctional/parallel/License (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-341000 docker-env) && out/minikube-darwin-amd64 status -p functional-341000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-341000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-341000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-341000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-341000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 55236: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-341000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-341000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-341000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9b789ed3-657b-4067-aaba-83f19799d36b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9b789ed3-657b-4067-aaba-83f19799d36b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.003666738s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.16s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-341000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
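
WaitService/IngressIP reads the LoadBalancer ingress address that the running minikube tunnel assigns to nginx-svc. A hedged Go sketch that polls the same jsonpath query until an address appears (the one-minute timeout and two-second interval are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "functional-341000",
				"get", "svc", "nginx-svc",
				"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
			ip := strings.TrimSpace(string(out))
			if err == nil && ip != "" {
				fmt.Println("tunnel ingress IP:", ip) // 10.108.2.38 in the run above
				return
			}
			time.Sleep(2 * time.Second)
		}
		panic("no LoadBalancer ingress IP assigned; is 'minikube tunnel' running?")
	}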

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.2.38 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-341000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-341000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-341000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-b5w9p" [91c1452f-7672-4d9b-a09d-d09b568f627b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-b5w9p" [91c1452f-7672-4d9b-a09d-d09b568f627b] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003822726s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 service list -o json
functional_test.go:1490: Took "784.450057ms" to run "out/minikube-darwin-amd64 -p functional-341000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.50:30105
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.50:30105
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
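
ServiceCmd/URL resolves the NodePort URL for the hello-node service, much like the HTTPS subtest earlier. A hedged Go sketch that asks minikube for the URL and issues a plain GET against it (assumes the command prints a single URL, as it does in this run):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-341000",
			"service", "hello-node", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out)) // http://192.169.0.50:30105 in this run

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("GET %s -> %s (%d bytes)\n", url, resp.Status, len(body))
	}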

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "217.843914ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.356929ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "213.794089ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "87.254534ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)
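
profile_json_output exercises profile list -o json and its --light variant. The JSON schema is not reproduced in this report, so the sketch below decodes into a generic map and only reports the top-level keys; it is an assumption-level example rather than the test's own parsing:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64",
			"profile", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		// The schema is not reproduced in this report, so decode generically.
		var parsed map[string]interface{}
		if err := json.Unmarshal(out, &parsed); err != nil {
			panic(err)
		}
		for key, val := range parsed {
			fmt.Printf("%s: %T\n", key, val)
		}
	}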

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3246279173/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714943655100997000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3246279173/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714943655100997000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3246279173/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714943655100997000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3246279173/001/test-1714943655100997000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (177.649065ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  5 21:14 created-by-test
-rw-r--r-- 1 docker docker 24 May  5 21:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  5 21:14 test-1714943655100997000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh cat /mount-9p/test-1714943655100997000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-341000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1d5f268c-cd62-4647-8f3f-3b6056088311] Pending
helpers_test.go:344: "busybox-mount" [1d5f268c-cd62-4647-8f3f-3b6056088311] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1d5f268c-cd62-4647-8f3f-3b6056088311] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1d5f268c-cd62-4647-8f3f-3b6056088311] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002350757s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-341000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3246279173/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.21s)
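
The first findmnt probe in this subtest fails because the 9p share is still being wired up in the background; the test simply retries until grep finds the mount. A hedged Go sketch of the same probe loop (the attempt count and interval are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-341000",
				"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
			if err == nil {
				fmt.Printf("9p mount visible after %d attempt(s):\n%s", attempt, out)
				return
			}
			// Exit status 1 just means grep found nothing yet; the mount may still
			// be coming up in the background, so wait and retry.
			time.Sleep(time.Second)
		}
		panic("/mount-9p never appeared as a 9p mount")
	}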

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2925982526/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (165.974349ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2925982526/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh "sudo umount -f /mount-9p": exit status 1 (135.985122ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-341000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port2925982526/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1275588178/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1275588178/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1275588178/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount1: exit status 1 (209.34802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount1: exit status 1 (209.784811ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-341000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1275588178/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1275588178/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-341000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1275588178/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.53s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-341000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-341000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-341000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-341000 image ls --format short --alsologtostderr:
I0505 14:14:41.162438   55638 out.go:291] Setting OutFile to fd 1 ...
I0505 14:14:41.162608   55638 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.162613   55638 out.go:304] Setting ErrFile to fd 2...
I0505 14:14:41.162617   55638 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.162783   55638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:14:41.163529   55638 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.163626   55638 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.164016   55638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.164056   55638 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.172346   55638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56974
I0505 14:14:41.172696   55638 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.173127   55638 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.173137   55638 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.173361   55638 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.173483   55638 main.go:141] libmachine: (functional-341000) Calling .GetState
I0505 14:14:41.173595   55638 main.go:141] libmachine: (functional-341000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:14:41.173688   55638 main.go:141] libmachine: (functional-341000) DBG | hyperkit pid from json: 54844
I0505 14:14:41.174896   55638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.174924   55638 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.183655   55638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56978
I0505 14:14:41.184059   55638 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.184362   55638 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.184389   55638 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.184650   55638 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.184776   55638 main.go:141] libmachine: (functional-341000) Calling .DriverName
I0505 14:14:41.184930   55638 ssh_runner.go:195] Run: systemctl --version
I0505 14:14:41.184946   55638 main.go:141] libmachine: (functional-341000) Calling .GetSSHHostname
I0505 14:14:41.185042   55638 main.go:141] libmachine: (functional-341000) Calling .GetSSHPort
I0505 14:14:41.185127   55638 main.go:141] libmachine: (functional-341000) Calling .GetSSHKeyPath
I0505 14:14:41.185207   55638 main.go:141] libmachine: (functional-341000) Calling .GetSSHUsername
I0505 14:14:41.185289   55638 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/functional-341000/id_rsa Username:docker}
I0505 14:14:41.220503   55638 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0505 14:14:41.236697   55638 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.236707   55638 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.236870   55638 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.236884   55638 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.236897   55638 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 14:14:41.236907   55638 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.236914   55638 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.237069   55638 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.237077   55638 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.237082   55638 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-341000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-341000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-341000 | 83a9f016d0b2a | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-341000 image ls --format table --alsologtostderr:
I0505 14:14:41.351052   55646 out.go:291] Setting OutFile to fd 1 ...
I0505 14:14:41.351323   55646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.351328   55646 out.go:304] Setting ErrFile to fd 2...
I0505 14:14:41.351332   55646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.351505   55646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:14:41.352189   55646 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.352292   55646 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.352644   55646 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.352686   55646 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.361313   55646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56988
I0505 14:14:41.361716   55646 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.362119   55646 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.362150   55646 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.362361   55646 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.362487   55646 main.go:141] libmachine: (functional-341000) Calling .GetState
I0505 14:14:41.362577   55646 main.go:141] libmachine: (functional-341000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:14:41.362651   55646 main.go:141] libmachine: (functional-341000) DBG | hyperkit pid from json: 54844
I0505 14:14:41.363852   55646 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.363876   55646 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.372386   55646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56990
I0505 14:14:41.372728   55646 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.373106   55646 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.373126   55646 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.373328   55646 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.373439   55646 main.go:141] libmachine: (functional-341000) Calling .DriverName
I0505 14:14:41.373607   55646 ssh_runner.go:195] Run: systemctl --version
I0505 14:14:41.373627   55646 main.go:141] libmachine: (functional-341000) Calling .GetSSHHostname
I0505 14:14:41.373721   55646 main.go:141] libmachine: (functional-341000) Calling .GetSSHPort
I0505 14:14:41.373805   55646 main.go:141] libmachine: (functional-341000) Calling .GetSSHKeyPath
I0505 14:14:41.373885   55646 main.go:141] libmachine: (functional-341000) Calling .GetSSHUsername
I0505 14:14:41.373972   55646 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/functional-341000/id_rsa Username:docker}
I0505 14:14:41.409970   55646 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0505 14:14:41.428715   55646 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.428736   55646 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.428880   55646 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.428888   55646 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 14:14:41.428895   55646 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.428901   55646 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.428918   55646 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.429057   55646 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.429061   55646 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.429075   55646 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-341000 image ls --format json --alsologtostderr:
[{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-341000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47
bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538
410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"83a9f016d0b2a34ea6e5d9f90f6b37a1dbc4b95d7574946bf7cb2a45fc2d0aeb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-341000"],"size":"30"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v
1.11.1"],"size":"59800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-341000 image ls --format json --alsologtostderr:
I0505 14:14:41.331115   55645 out.go:291] Setting OutFile to fd 1 ...
I0505 14:14:41.331320   55645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.331326   55645 out.go:304] Setting ErrFile to fd 2...
I0505 14:14:41.331329   55645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.331520   55645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:14:41.332159   55645 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.332251   55645 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.332590   55645 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.332633   55645 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.341401   55645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56983
I0505 14:14:41.341794   55645 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.342252   55645 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.342262   55645 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.342484   55645 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.342610   55645 main.go:141] libmachine: (functional-341000) Calling .GetState
I0505 14:14:41.342690   55645 main.go:141] libmachine: (functional-341000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:14:41.342765   55645 main.go:141] libmachine: (functional-341000) DBG | hyperkit pid from json: 54844
I0505 14:14:41.343975   55645 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.344000   55645 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.352508   55645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56985
I0505 14:14:41.352865   55645 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.353251   55645 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.353280   55645 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.353522   55645 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.353649   55645 main.go:141] libmachine: (functional-341000) Calling .DriverName
I0505 14:14:41.353808   55645 ssh_runner.go:195] Run: systemctl --version
I0505 14:14:41.353824   55645 main.go:141] libmachine: (functional-341000) Calling .GetSSHHostname
I0505 14:14:41.353909   55645 main.go:141] libmachine: (functional-341000) Calling .GetSSHPort
I0505 14:14:41.353992   55645 main.go:141] libmachine: (functional-341000) Calling .GetSSHKeyPath
I0505 14:14:41.354077   55645 main.go:141] libmachine: (functional-341000) Calling .GetSSHUsername
I0505 14:14:41.354181   55645 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/functional-341000/id_rsa Username:docker}
I0505 14:14:41.387086   55645 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0505 14:14:41.406019   55645 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.406027   55645 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.406175   55645 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.406187   55645 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 14:14:41.406207   55645 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.406220   55645 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.406232   55645 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.406354   55645 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.406361   55645 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)
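The JSON printed by `image ls --format json` above is an array of objects with the keys id, repoDigests, repoTags, and size. A minimal Go sketch of decoding that shape is below; the struct and program are illustrative only (field names are taken from the stdout above, not from the test suite's actual types), and the sample entry is copied from the listing.

package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors the keys visible in the `image ls --format json` output.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// One entry copied from the stdout above (registry.k8s.io/pause:3.1).
	raw := `[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]`

	var images []imageEntry
	if err := json.Unmarshal([]byte(raw), &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s -> %v (%s bytes)\n", img.ID[:12], img.RepoTags, img.Size)
	}
}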

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-341000 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-341000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 83a9f016d0b2a34ea6e5d9f90f6b37a1dbc4b95d7574946bf7cb2a45fc2d0aeb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-341000
size: "30"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-341000 image ls --format yaml --alsologtostderr:
I0505 14:14:41.162160   55637 out.go:291] Setting OutFile to fd 1 ...
I0505 14:14:41.162463   55637 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.162468   55637 out.go:304] Setting ErrFile to fd 2...
I0505 14:14:41.162472   55637 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.162662   55637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:14:41.163246   55637 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.163341   55637 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.163721   55637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.163765   55637 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.172263   55637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56973
I0505 14:14:41.172659   55637 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.173133   55637 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.173150   55637 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.173370   55637 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.173493   55637 main.go:141] libmachine: (functional-341000) Calling .GetState
I0505 14:14:41.173599   55637 main.go:141] libmachine: (functional-341000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:14:41.173676   55637 main.go:141] libmachine: (functional-341000) DBG | hyperkit pid from json: 54844
I0505 14:14:41.175005   55637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.175034   55637 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.183620   55637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56977
I0505 14:14:41.183964   55637 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.184314   55637 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.184329   55637 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.184530   55637 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.184636   55637 main.go:141] libmachine: (functional-341000) Calling .DriverName
I0505 14:14:41.184798   55637 ssh_runner.go:195] Run: systemctl --version
I0505 14:14:41.184813   55637 main.go:141] libmachine: (functional-341000) Calling .GetSSHHostname
I0505 14:14:41.184900   55637 main.go:141] libmachine: (functional-341000) Calling .GetSSHPort
I0505 14:14:41.184980   55637 main.go:141] libmachine: (functional-341000) Calling .GetSSHKeyPath
I0505 14:14:41.185070   55637 main.go:141] libmachine: (functional-341000) Calling .GetSSHUsername
I0505 14:14:41.185170   55637 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/functional-341000/id_rsa Username:docker}
I0505 14:14:41.220675   55637 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0505 14:14:41.236596   55637 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.236618   55637 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.236789   55637 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.236794   55637 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.236799   55637 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 14:14:41.236807   55637 main.go:141] libmachine: Making call to close driver server
I0505 14:14:41.236812   55637 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:41.236966   55637 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:41.236971   55637 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:41.236979   55637 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-341000 ssh pgrep buildkitd: exit status 1 (132.507356ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image build -t localhost/my-image:functional-341000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 image build -t localhost/my-image:functional-341000 testdata/build --alsologtostderr: (1.629479442s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-341000 image build -t localhost/my-image:functional-341000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 8693dfb9bbcb
---> Removed intermediate container 8693dfb9bbcb
---> a663c327e21b
Step 3/3 : ADD content.txt /
---> f4d1d3c33cf0
Successfully built f4d1d3c33cf0
Successfully tagged localhost/my-image:functional-341000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-341000 image build -t localhost/my-image:functional-341000 testdata/build --alsologtostderr:
I0505 14:14:41.628856   55658 out.go:291] Setting OutFile to fd 1 ...
I0505 14:14:41.629064   55658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.629070   55658 out.go:304] Setting ErrFile to fd 2...
I0505 14:14:41.629074   55658 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0505 14:14:41.629263   55658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
I0505 14:14:41.629887   55658 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.630526   55658 config.go:182] Loaded profile config "functional-341000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0505 14:14:41.630879   55658 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.630925   55658 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.639365   55658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57000
I0505 14:14:41.639819   55658 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.640226   55658 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.640237   55658 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.640461   55658 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.640577   55658 main.go:141] libmachine: (functional-341000) Calling .GetState
I0505 14:14:41.640658   55658 main.go:141] libmachine: (functional-341000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0505 14:14:41.640726   55658 main.go:141] libmachine: (functional-341000) DBG | hyperkit pid from json: 54844
I0505 14:14:41.641907   55658 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0505 14:14:41.641929   55658 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0505 14:14:41.650393   55658 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57002
I0505 14:14:41.650734   55658 main.go:141] libmachine: () Calling .GetVersion
I0505 14:14:41.651090   55658 main.go:141] libmachine: Using API Version  1
I0505 14:14:41.651110   55658 main.go:141] libmachine: () Calling .SetConfigRaw
I0505 14:14:41.651330   55658 main.go:141] libmachine: () Calling .GetMachineName
I0505 14:14:41.651437   55658 main.go:141] libmachine: (functional-341000) Calling .DriverName
I0505 14:14:41.651602   55658 ssh_runner.go:195] Run: systemctl --version
I0505 14:14:41.651619   55658 main.go:141] libmachine: (functional-341000) Calling .GetSSHHostname
I0505 14:14:41.651695   55658 main.go:141] libmachine: (functional-341000) Calling .GetSSHPort
I0505 14:14:41.651771   55658 main.go:141] libmachine: (functional-341000) Calling .GetSSHKeyPath
I0505 14:14:41.651852   55658 main.go:141] libmachine: (functional-341000) Calling .GetSSHUsername
I0505 14:14:41.651941   55658 sshutil.go:53] new ssh client: &{IP:192.169.0.50 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/functional-341000/id_rsa Username:docker}
I0505 14:14:41.684208   55658 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3543803061.tar
I0505 14:14:41.684280   55658 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0505 14:14:41.692827   55658 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3543803061.tar
I0505 14:14:41.696118   55658 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3543803061.tar: stat -c "%s %y" /var/lib/minikube/build/build.3543803061.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3543803061.tar': No such file or directory
I0505 14:14:41.696148   55658 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3543803061.tar --> /var/lib/minikube/build/build.3543803061.tar (3072 bytes)
I0505 14:14:41.717061   55658 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3543803061
I0505 14:14:41.725305   55658 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3543803061 -xf /var/lib/minikube/build/build.3543803061.tar
I0505 14:14:41.733276   55658 docker.go:360] Building image: /var/lib/minikube/build/build.3543803061
I0505 14:14:41.733340   55658 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-341000 /var/lib/minikube/build/build.3543803061
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0505 14:14:43.150449   55658 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-341000 /var/lib/minikube/build/build.3543803061: (1.417094393s)
I0505 14:14:43.150516   55658 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3543803061
I0505 14:14:43.158547   55658 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3543803061.tar
I0505 14:14:43.165987   55658 build_images.go:217] Built localhost/my-image:functional-341000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3543803061.tar
I0505 14:14:43.166009   55658 build_images.go:133] succeeded building to: functional-341000
I0505 14:14:43.166014   55658 build_images.go:134] failed building to: 
I0505 14:14:43.166030   55658 main.go:141] libmachine: Making call to close driver server
I0505 14:14:43.166036   55658 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:43.166207   55658 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:43.166221   55658 main.go:141] libmachine: Making call to close connection to plugin binary
I0505 14:14:43.166228   55658 main.go:141] libmachine: Making call to close driver server
I0505 14:14:43.166233   55658 main.go:141] libmachine: (functional-341000) Calling .Close
I0505 14:14:43.166238   55658 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:43.166358   55658 main.go:141] libmachine: (functional-341000) DBG | Closing plugin on server side
I0505 14:14:43.166368   55658 main.go:141] libmachine: Successfully made call to close driver server
I0505 14:14:43.166376   55658 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.92s)
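The ImageBuild log above shows the sequence used to build on the VM: create /var/lib/minikube/build, copy the context tar up, extract it, run `docker build -t localhost/my-image:functional-341000` against the extracted directory, then remove the directory and the tar. A rough Go sketch of that sequence follows; it is illustrative only (the ssh helper and host string are assumptions, not minikube's build_images.go), with the paths and tag taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// runOnVM runs a single command on the VM over ssh and echoes its output.
func runOnVM(host, cmd string) error {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	host := "docker@192.169.0.50" // SSH user and IP shown in the log
	tar := "/var/lib/minikube/build/build.3543803061.tar"
	dir := "/var/lib/minikube/build/build.3543803061"
	tag := "localhost/my-image:functional-341000"

	steps := []string{
		"sudo mkdir -p /var/lib/minikube/build",
		// (the context tar is copied up to `tar` between these steps)
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + tar,
		"docker build -t " + tag + " " + dir,
		"sudo rm -rf " + dir,
		"sudo rm -f " + tar,
	}
	for _, s := range steps {
		if err := runOnVM(host, s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}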

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.889859914s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-341000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image load --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 image load --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr: (3.345402893s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image load --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr
2024/05/05 14:14:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 image load --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr: (1.878143639s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.975920773s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-341000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image load --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-341000 image load --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr: (2.626670112s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image save gcr.io/google-containers/addon-resizer:functional-341000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image rm gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-341000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-341000 image save --daemon gcr.io/google-containers/addon-resizer:functional-341000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-341000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-341000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-341000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-341000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (200.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-671000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0505 14:17:31.469128   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-671000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m19.95459065s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-671000 -- rollout status deployment/busybox: (4.551996404s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-kr2jr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-lfn9v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-q27t4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-kr2jr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-lfn9v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-q27t4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-kr2jr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-lfn9v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-q27t4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-kr2jr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-kr2jr -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-lfn9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-lfn9v -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-q27t4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-671000 -- exec busybox-fc5497c4f-q27t4 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (64.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-671000 -v=7 --alsologtostderr
E0505 14:18:23.828949   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:23.834227   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:23.845858   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:23.867169   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:23.908408   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:23.988887   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:24.150579   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:24.472242   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:25.112706   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:26.393793   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:28.954263   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:34.074993   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:18:44.315086   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 14:19:04.796535   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-671000 -v=7 --alsologtostderr: (1m4.204935787s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (64.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-671000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (9.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp testdata/cp-test.txt ha-671000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000:/home/docker/cp-test.txt ha-671000-m02:/home/docker/cp-test_ha-671000_ha-671000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test_ha-671000_ha-671000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000:/home/docker/cp-test.txt ha-671000-m03:/home/docker/cp-test_ha-671000_ha-671000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test_ha-671000_ha-671000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000:/home/docker/cp-test.txt ha-671000-m04:/home/docker/cp-test_ha-671000_ha-671000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test_ha-671000_ha-671000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp testdata/cp-test.txt ha-671000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m02:/home/docker/cp-test.txt ha-671000:/home/docker/cp-test_ha-671000-m02_ha-671000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test_ha-671000-m02_ha-671000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m02:/home/docker/cp-test.txt ha-671000-m03:/home/docker/cp-test_ha-671000-m02_ha-671000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test_ha-671000-m02_ha-671000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m02:/home/docker/cp-test.txt ha-671000-m04:/home/docker/cp-test_ha-671000-m02_ha-671000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test_ha-671000-m02_ha-671000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp testdata/cp-test.txt ha-671000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt ha-671000:/home/docker/cp-test_ha-671000-m03_ha-671000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test_ha-671000-m03_ha-671000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt ha-671000-m02:/home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test_ha-671000-m03_ha-671000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m03:/home/docker/cp-test.txt ha-671000-m04:/home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test_ha-671000-m03_ha-671000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp testdata/cp-test.txt ha-671000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile4235302821/001/cp-test_ha-671000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt ha-671000:/home/docker/cp-test_ha-671000-m04_ha-671000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000 "sudo cat /home/docker/cp-test_ha-671000-m04_ha-671000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt ha-671000-m02:/home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m02 "sudo cat /home/docker/cp-test_ha-671000-m04_ha-671000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 cp ha-671000-m04:/home/docker/cp-test.txt ha-671000-m03:/home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 ssh -n ha-671000-m03 "sudo cat /home/docker/cp-test_ha-671000-m04_ha-671000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.68s)
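The CopyFile block above repeats one pattern per node pair: `minikube cp` a file onto a node, then read it back with `minikube ssh -n <node> "sudo cat ..."`. A minimal Go sketch of that copy-then-verify loop is below; the binary path, profile, node names, and destination path come from the log, while the expected file content is a stand-in and the comparison logic is illustrative, not helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-darwin-amd64"
	profile := "ha-671000"
	nodes := []string{"ha-671000", "ha-671000-m02", "ha-671000-m03", "ha-671000-m04"}
	want := "hello from cp-test" // stand-in for the real testdata/cp-test.txt content

	for _, node := range nodes {
		// Copy the test file onto the node.
		dst := node + ":/home/docker/cp-test.txt"
		if err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", dst).Run(); err != nil {
			fmt.Println("cp failed:", err)
			continue
		}
		// Read it back over ssh and compare.
		out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			continue
		}
		fmt.Printf("%s: match=%v\n", node, strings.TrimSpace(string(out)) == want)
	}
}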

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (8.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 node stop m02 -v=7 --alsologtostderr: (8.356869107s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr: exit status 7 (362.086493ms)

                                                
                                                
-- stdout --
	ha-671000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-671000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-671000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:19:40.740717   56185 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:19:40.741010   56185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:19:40.741016   56185 out.go:304] Setting ErrFile to fd 2...
	I0505 14:19:40.741020   56185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:19:40.741194   56185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:19:40.741370   56185 out.go:298] Setting JSON to false
	I0505 14:19:40.741394   56185 mustload.go:65] Loading cluster: ha-671000
	I0505 14:19:40.741439   56185 notify.go:220] Checking for updates...
	I0505 14:19:40.741693   56185 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:19:40.741709   56185 status.go:255] checking status of ha-671000 ...
	I0505 14:19:40.742072   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.742113   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.750772   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57735
	I0505 14:19:40.751122   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.751533   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.751543   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.751792   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.751904   56185 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:19:40.751992   56185 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:19:40.752093   56185 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 55694
	I0505 14:19:40.753073   56185 status.go:330] ha-671000 host status = "Running" (err=<nil>)
	I0505 14:19:40.753093   56185 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:19:40.753338   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.753357   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.761658   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57737
	I0505 14:19:40.762015   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.762330   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.762348   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.762591   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.762706   56185 main.go:141] libmachine: (ha-671000) Calling .GetIP
	I0505 14:19:40.762796   56185 host.go:66] Checking if "ha-671000" exists ...
	I0505 14:19:40.763041   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.763064   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.773966   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57739
	I0505 14:19:40.774336   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.774659   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.774687   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.774896   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.775005   56185 main.go:141] libmachine: (ha-671000) Calling .DriverName
	I0505 14:19:40.775186   56185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:19:40.775218   56185 main.go:141] libmachine: (ha-671000) Calling .GetSSHHostname
	I0505 14:19:40.775300   56185 main.go:141] libmachine: (ha-671000) Calling .GetSSHPort
	I0505 14:19:40.775373   56185 main.go:141] libmachine: (ha-671000) Calling .GetSSHKeyPath
	I0505 14:19:40.775451   56185 main.go:141] libmachine: (ha-671000) Calling .GetSSHUsername
	I0505 14:19:40.775541   56185 sshutil.go:53] new ssh client: &{IP:192.169.0.51 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000/id_rsa Username:docker}
	I0505 14:19:40.805613   56185 ssh_runner.go:195] Run: systemctl --version
	I0505 14:19:40.810559   56185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:19:40.826095   56185 kubeconfig.go:125] found "ha-671000" server: "https://192.169.0.254:8443"
	I0505 14:19:40.826121   56185 api_server.go:166] Checking apiserver status ...
	I0505 14:19:40.826161   56185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:19:40.838740   56185 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1938/cgroup
	W0505 14:19:40.847067   56185 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1938/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:19:40.847117   56185 ssh_runner.go:195] Run: ls
	I0505 14:19:40.850390   56185 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0505 14:19:40.854477   56185 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0505 14:19:40.854490   56185 status.go:422] ha-671000 apiserver status = Running (err=<nil>)
	I0505 14:19:40.854499   56185 status.go:257] ha-671000 status: &{Name:ha-671000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:19:40.854510   56185 status.go:255] checking status of ha-671000-m02 ...
	I0505 14:19:40.854769   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.854789   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.863483   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57743
	I0505 14:19:40.863818   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.864160   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.864173   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.864382   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.864499   56185 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:19:40.864581   56185 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:19:40.864652   56185 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 55719
	I0505 14:19:40.865629   56185 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 55719 missing from process table
	I0505 14:19:40.865649   56185 status.go:330] ha-671000-m02 host status = "Stopped" (err=<nil>)
	I0505 14:19:40.865657   56185 status.go:343] host is not running, skipping remaining checks
	I0505 14:19:40.865664   56185 status.go:257] ha-671000-m02 status: &{Name:ha-671000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:19:40.865682   56185 status.go:255] checking status of ha-671000-m03 ...
	I0505 14:19:40.865946   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.865968   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.874636   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57745
	I0505 14:19:40.874993   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.875333   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.875350   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.875555   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.875682   56185 main.go:141] libmachine: (ha-671000-m03) Calling .GetState
	I0505 14:19:40.875766   56185 main.go:141] libmachine: (ha-671000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:19:40.875860   56185 main.go:141] libmachine: (ha-671000-m03) DBG | hyperkit pid from json: 55740
	I0505 14:19:40.876876   56185 status.go:330] ha-671000-m03 host status = "Running" (err=<nil>)
	I0505 14:19:40.876885   56185 host.go:66] Checking if "ha-671000-m03" exists ...
	I0505 14:19:40.877153   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.877174   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.886157   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57747
	I0505 14:19:40.886491   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.886799   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.886809   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.887033   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.887142   56185 main.go:141] libmachine: (ha-671000-m03) Calling .GetIP
	I0505 14:19:40.887227   56185 host.go:66] Checking if "ha-671000-m03" exists ...
	I0505 14:19:40.887510   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.887533   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.896096   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57749
	I0505 14:19:40.896438   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.896797   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.896814   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.897026   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.897135   56185 main.go:141] libmachine: (ha-671000-m03) Calling .DriverName
	I0505 14:19:40.897261   56185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:19:40.897272   56185 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHHostname
	I0505 14:19:40.897353   56185 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHPort
	I0505 14:19:40.897433   56185 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHKeyPath
	I0505 14:19:40.897516   56185 main.go:141] libmachine: (ha-671000-m03) Calling .GetSSHUsername
	I0505 14:19:40.897582   56185 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m03/id_rsa Username:docker}
	I0505 14:19:40.928237   56185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:19:40.939193   56185 kubeconfig.go:125] found "ha-671000" server: "https://192.169.0.254:8443"
	I0505 14:19:40.939208   56185 api_server.go:166] Checking apiserver status ...
	I0505 14:19:40.939244   56185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:19:40.949988   56185 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup
	W0505 14:19:40.957343   56185 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1842/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:19:40.957402   56185 ssh_runner.go:195] Run: ls
	I0505 14:19:40.960606   56185 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0505 14:19:40.963791   56185 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0505 14:19:40.963803   56185 status.go:422] ha-671000-m03 apiserver status = Running (err=<nil>)
	I0505 14:19:40.963811   56185 status.go:257] ha-671000-m03 status: &{Name:ha-671000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:19:40.963832   56185 status.go:255] checking status of ha-671000-m04 ...
	I0505 14:19:40.964087   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.964107   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.972823   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57753
	I0505 14:19:40.973175   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.973539   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.973553   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.973744   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.973864   56185 main.go:141] libmachine: (ha-671000-m04) Calling .GetState
	I0505 14:19:40.973950   56185 main.go:141] libmachine: (ha-671000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:19:40.974064   56185 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid from json: 55847
	I0505 14:19:40.975061   56185 status.go:330] ha-671000-m04 host status = "Running" (err=<nil>)
	I0505 14:19:40.975070   56185 host.go:66] Checking if "ha-671000-m04" exists ...
	I0505 14:19:40.975330   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.975354   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.984041   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57755
	I0505 14:19:40.984430   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.984745   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.984755   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.984990   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.985112   56185 main.go:141] libmachine: (ha-671000-m04) Calling .GetIP
	I0505 14:19:40.985203   56185 host.go:66] Checking if "ha-671000-m04" exists ...
	I0505 14:19:40.985459   56185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:19:40.985486   56185 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:19:40.993825   56185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57757
	I0505 14:19:40.994168   56185 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:19:40.994516   56185 main.go:141] libmachine: Using API Version  1
	I0505 14:19:40.994532   56185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:19:40.994753   56185 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:19:40.994860   56185 main.go:141] libmachine: (ha-671000-m04) Calling .DriverName
	I0505 14:19:40.994979   56185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:19:40.994989   56185 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHHostname
	I0505 14:19:40.995080   56185 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHPort
	I0505 14:19:40.995161   56185 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHKeyPath
	I0505 14:19:40.995245   56185 main.go:141] libmachine: (ha-671000-m04) Calling .GetSSHUsername
	I0505 14:19:40.995325   56185 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/ha-671000-m04/id_rsa Username:docker}
	I0505 14:19:41.026265   56185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:19:41.037342   56185 status.go:257] ha-671000-m04 status: &{Name:ha-671000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.72s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.30s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (39.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 node start m02 -v=7 --alsologtostderr
E0505 14:19:45.758064   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 node start m02 -v=7 --alsologtostderr: (39.429394951s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.94s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.30s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (91.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-671000 stop -v=7 --alsologtostderr: (1m31.729459187s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr: exit status 7 (100.775999ms)

                                                
                                                
-- stdout --
	ha-671000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-671000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-671000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:25:35.064149   56417 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:25:35.064345   56417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:25:35.064350   56417 out.go:304] Setting ErrFile to fd 2...
	I0505 14:25:35.064354   56417 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:25:35.064533   56417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:25:35.064755   56417 out.go:298] Setting JSON to false
	I0505 14:25:35.064779   56417 mustload.go:65] Loading cluster: ha-671000
	I0505 14:25:35.064826   56417 notify.go:220] Checking for updates...
	I0505 14:25:35.065104   56417 config.go:182] Loaded profile config "ha-671000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:25:35.065119   56417 status.go:255] checking status of ha-671000 ...
	I0505 14:25:35.065471   56417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.065523   56417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:35.074677   56417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58073
	I0505 14:25:35.075059   56417 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:35.075483   56417 main.go:141] libmachine: Using API Version  1
	I0505 14:25:35.075494   56417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:35.075703   56417 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:35.075910   56417 main.go:141] libmachine: (ha-671000) Calling .GetState
	I0505 14:25:35.076034   56417 main.go:141] libmachine: (ha-671000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:35.076081   56417 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid from json: 56275
	I0505 14:25:35.077001   56417 main.go:141] libmachine: (ha-671000) DBG | hyperkit pid 56275 missing from process table
	I0505 14:25:35.077024   56417 status.go:330] ha-671000 host status = "Stopped" (err=<nil>)
	I0505 14:25:35.077033   56417 status.go:343] host is not running, skipping remaining checks
	I0505 14:25:35.077040   56417 status.go:257] ha-671000 status: &{Name:ha-671000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:25:35.077062   56417 status.go:255] checking status of ha-671000-m02 ...
	I0505 14:25:35.077320   56417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.077347   56417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:35.085705   56417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58076
	I0505 14:25:35.085996   56417 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:35.086293   56417 main.go:141] libmachine: Using API Version  1
	I0505 14:25:35.086305   56417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:35.086532   56417 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:35.086651   56417 main.go:141] libmachine: (ha-671000-m02) Calling .GetState
	I0505 14:25:35.086730   56417 main.go:141] libmachine: (ha-671000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:35.086818   56417 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid from json: 56285
	I0505 14:25:35.087705   56417 main.go:141] libmachine: (ha-671000-m02) DBG | hyperkit pid 56285 missing from process table
	I0505 14:25:35.087735   56417 status.go:330] ha-671000-m02 host status = "Stopped" (err=<nil>)
	I0505 14:25:35.087745   56417 status.go:343] host is not running, skipping remaining checks
	I0505 14:25:35.087751   56417 status.go:257] ha-671000-m02 status: &{Name:ha-671000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:25:35.087767   56417 status.go:255] checking status of ha-671000-m04 ...
	I0505 14:25:35.088015   56417 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:25:35.088037   56417 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:25:35.096392   56417 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58078
	I0505 14:25:35.096700   56417 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:25:35.097043   56417 main.go:141] libmachine: Using API Version  1
	I0505 14:25:35.097059   56417 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:25:35.097738   56417 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:25:35.098181   56417 main.go:141] libmachine: (ha-671000-m04) Calling .GetState
	I0505 14:25:35.098468   56417 main.go:141] libmachine: (ha-671000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:25:35.098523   56417 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid from json: 55847
	I0505 14:25:35.099445   56417 main.go:141] libmachine: (ha-671000-m04) DBG | hyperkit pid 55847 missing from process table
	I0505 14:25:35.099466   56417 status.go:330] ha-671000-m04 host status = "Stopped" (err=<nil>)
	I0505 14:25:35.099474   56417 status.go:343] host is not running, skipping remaining checks
	I0505 14:25:35.099481   56417 status.go:257] ha-671000-m04 status: &{Name:ha-671000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (91.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (107.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-671000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (1m46.619192132s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-671000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.07s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.35s)

                                                
                                    
TestImageBuild/serial/Setup (39.96s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-606000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-606000 --driver=hyperkit : (39.958317955s)
--- PASS: TestImageBuild/serial/Setup (39.96s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.37s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-606000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-606000: (1.366095962s)
--- PASS: TestImageBuild/serial/NormalBuild (1.37s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-606000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.51s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-606000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.25s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-606000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

                                                
                                    
TestJSONOutput/start/Command (94.24s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-413000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-413000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m34.237773006s)
--- PASS: TestJSONOutput/start/Command (94.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.49s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-413000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.49s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.47s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-413000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-413000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-413000 --output=json --user=testUser: (8.352836056s)
--- PASS: TestJSONOutput/stop/Command (8.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.77s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-597000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-597000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (397.536982ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6f1fdf35-184e-4d06-822a-479c3296715d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-597000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99889f64-1bed-41d1-af7b-979b80e74fe3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18602"}}
	{"specversion":"1.0","id":"7a741944-eb56-48e7-a2ee-1f8d12229212","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig"}}
	{"specversion":"1.0","id":"057b20aa-ae9d-4205-aee3-89ffccbcad1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"34ca464c-ed61-4b25-85c1-db9a8461f7e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9fb4f75b-b2b3-4abf-900c-5f67943ce58c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube"}}
	{"specversion":"1.0","id":"ed4daa41-ff27-494f-bf00-facbf033a3cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7cb9e580-4434-4681-b299-3914021a9a49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-597000
--- PASS: TestErrorJSONOutput (0.77s)

                                                
                                    
TestMainNoArgs (0.09s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

                                                
                                    
TestMinikubeProfile (89.84s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-148000 --driver=hyperkit 
E0505 14:37:31.472704   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-148000 --driver=hyperkit : (39.740520496s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-157000 --driver=hyperkit 
E0505 14:38:23.828647   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-157000 --driver=hyperkit : (40.49500442s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-148000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-157000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-157000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-157000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-157000: (3.427440631s)
helpers_test.go:175: Cleaning up "first-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-148000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-148000: (5.279725478s)
--- PASS: TestMinikubeProfile (89.84s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.5s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-904000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-904000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.500483607s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.50s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.33s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-904000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-904000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.33s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-917000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-917000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (18.11505018s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-917000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-917000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.39s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-904000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-904000 --alsologtostderr -v=5: (2.392436602s)
--- PASS: TestMountStart/serial/DeleteFirst (2.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-917000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-917000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
TestMountStart/serial/Stop (2.4s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-917000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-917000: (2.396698004s)
--- PASS: TestMountStart/serial/Stop (2.40s)

                                                
                                    
TestMountStart/serial/RestartStopped (42.44s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-917000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-917000: (41.434431182s)
--- PASS: TestMountStart/serial/RestartStopped (42.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-917000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-917000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (215.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-766000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0505 14:40:34.537892   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:42:31.467644   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 14:43:23.823907   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-766000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (3m35.404250629s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (215.66s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-766000 -- rollout status deployment/busybox: (4.001337157s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-558zr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-6j57h -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-558zr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-6j57h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-558zr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-6j57h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.75s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-558zr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-558zr -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-6j57h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-766000 -- exec busybox-fc5497c4f-6j57h -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                    
TestMultiNode/serial/AddNode (37.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-766000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-766000 -v 3 --alsologtostderr: (37.541527614s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-766000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp testdata/cp-test.txt multinode-766000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile2847248338/001/cp-test_multinode-766000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000:/home/docker/cp-test.txt multinode-766000-m02:/home/docker/cp-test_multinode-766000_multinode-766000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m02 "sudo cat /home/docker/cp-test_multinode-766000_multinode-766000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000:/home/docker/cp-test.txt multinode-766000-m03:/home/docker/cp-test_multinode-766000_multinode-766000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m03 "sudo cat /home/docker/cp-test_multinode-766000_multinode-766000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp testdata/cp-test.txt multinode-766000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile2847248338/001/cp-test_multinode-766000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000-m02:/home/docker/cp-test.txt multinode-766000:/home/docker/cp-test_multinode-766000-m02_multinode-766000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000 "sudo cat /home/docker/cp-test_multinode-766000-m02_multinode-766000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000-m02:/home/docker/cp-test.txt multinode-766000-m03:/home/docker/cp-test_multinode-766000-m02_multinode-766000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m03 "sudo cat /home/docker/cp-test_multinode-766000-m02_multinode-766000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp testdata/cp-test.txt multinode-766000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile2847248338/001/cp-test_multinode-766000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000-m03:/home/docker/cp-test.txt multinode-766000:/home/docker/cp-test_multinode-766000-m03_multinode-766000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000 "sudo cat /home/docker/cp-test_multinode-766000-m03_multinode-766000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 cp multinode-766000-m03:/home/docker/cp-test.txt multinode-766000-m02:/home/docker/cp-test_multinode-766000-m03_multinode-766000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 ssh -n multinode-766000-m02 "sudo cat /home/docker/cp-test_multinode-766000-m03_multinode-766000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.64s)

                                                
                                    
TestMultiNode/serial/StopNode (2.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-766000 node stop m03: (2.350628719s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-766000 status: exit status 7 (257.254895ms)

                                                
                                                
-- stdout --
	multinode-766000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-766000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-766000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr: exit status 7 (260.386052ms)

                                                
                                                
-- stdout --
	multinode-766000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-766000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-766000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:44:55.316710   57413 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:44:55.316882   57413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:44:55.316888   57413 out.go:304] Setting ErrFile to fd 2...
	I0505 14:44:55.316891   57413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:44:55.317081   57413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:44:55.317277   57413 out.go:298] Setting JSON to false
	I0505 14:44:55.317302   57413 mustload.go:65] Loading cluster: multinode-766000
	I0505 14:44:55.317351   57413 notify.go:220] Checking for updates...
	I0505 14:44:55.317636   57413 config.go:182] Loaded profile config "multinode-766000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:44:55.317651   57413 status.go:255] checking status of multinode-766000 ...
	I0505 14:44:55.318007   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.318053   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.327997   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59171
	I0505 14:44:55.328376   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.328796   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.328812   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.329024   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.329132   57413 main.go:141] libmachine: (multinode-766000) Calling .GetState
	I0505 14:44:55.329220   57413 main.go:141] libmachine: (multinode-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:44:55.329292   57413 main.go:141] libmachine: (multinode-766000) DBG | hyperkit pid from json: 57064
	I0505 14:44:55.330486   57413 status.go:330] multinode-766000 host status = "Running" (err=<nil>)
	I0505 14:44:55.330509   57413 host.go:66] Checking if "multinode-766000" exists ...
	I0505 14:44:55.330738   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.330759   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.339474   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59173
	I0505 14:44:55.339835   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.340227   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.340250   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.340530   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.340675   57413 main.go:141] libmachine: (multinode-766000) Calling .GetIP
	I0505 14:44:55.340786   57413 host.go:66] Checking if "multinode-766000" exists ...
	I0505 14:44:55.341808   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.341886   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.351639   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59175
	I0505 14:44:55.351954   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.352306   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.352340   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.352535   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.352644   57413 main.go:141] libmachine: (multinode-766000) Calling .DriverName
	I0505 14:44:55.352789   57413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:44:55.352808   57413 main.go:141] libmachine: (multinode-766000) Calling .GetSSHHostname
	I0505 14:44:55.352891   57413 main.go:141] libmachine: (multinode-766000) Calling .GetSSHPort
	I0505 14:44:55.352972   57413 main.go:141] libmachine: (multinode-766000) Calling .GetSSHKeyPath
	I0505 14:44:55.353052   57413 main.go:141] libmachine: (multinode-766000) Calling .GetSSHUsername
	I0505 14:44:55.353135   57413 sshutil.go:53] new ssh client: &{IP:192.169.0.62 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/multinode-766000/id_rsa Username:docker}
	I0505 14:44:55.382837   57413 ssh_runner.go:195] Run: systemctl --version
	I0505 14:44:55.387257   57413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:44:55.398331   57413 kubeconfig.go:125] found "multinode-766000" server: "https://192.169.0.62:8443"
	I0505 14:44:55.398354   57413 api_server.go:166] Checking apiserver status ...
	I0505 14:44:55.398390   57413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0505 14:44:55.409256   57413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup
	W0505 14:44:55.416334   57413 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0505 14:44:55.416380   57413 ssh_runner.go:195] Run: ls
	I0505 14:44:55.419720   57413 api_server.go:253] Checking apiserver healthz at https://192.169.0.62:8443/healthz ...
	I0505 14:44:55.422802   57413 api_server.go:279] https://192.169.0.62:8443/healthz returned 200:
	ok
	I0505 14:44:55.422814   57413 status.go:422] multinode-766000 apiserver status = Running (err=<nil>)
	I0505 14:44:55.422823   57413 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:44:55.422834   57413 status.go:255] checking status of multinode-766000-m02 ...
	I0505 14:44:55.423077   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.423097   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.431923   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59179
	I0505 14:44:55.432274   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.432662   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.432677   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.432870   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.432992   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .GetState
	I0505 14:44:55.433078   57413 main.go:141] libmachine: (multinode-766000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:44:55.433146   57413 main.go:141] libmachine: (multinode-766000-m02) DBG | hyperkit pid from json: 57127
	I0505 14:44:55.434338   57413 status.go:330] multinode-766000-m02 host status = "Running" (err=<nil>)
	I0505 14:44:55.434349   57413 host.go:66] Checking if "multinode-766000-m02" exists ...
	I0505 14:44:55.434599   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.434623   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.443239   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59181
	I0505 14:44:55.443595   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.443940   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.443957   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.444159   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.444260   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .GetIP
	I0505 14:44:55.444345   57413 host.go:66] Checking if "multinode-766000-m02" exists ...
	I0505 14:44:55.444589   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.444612   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.453238   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59183
	I0505 14:44:55.453594   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.453903   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.453912   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.454147   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.454272   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .DriverName
	I0505 14:44:55.454419   57413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0505 14:44:55.454434   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .GetSSHHostname
	I0505 14:44:55.454517   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .GetSSHPort
	I0505 14:44:55.454605   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .GetSSHKeyPath
	I0505 14:44:55.454697   57413 main.go:141] libmachine: (multinode-766000-m02) Calling .GetSSHUsername
	I0505 14:44:55.454779   57413 sshutil.go:53] new ssh client: &{IP:192.169.0.63 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18602-53665/.minikube/machines/multinode-766000-m02/id_rsa Username:docker}
	I0505 14:44:55.489798   57413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0505 14:44:55.500003   57413 status.go:257] multinode-766000-m02 status: &{Name:multinode-766000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:44:55.500026   57413 status.go:255] checking status of multinode-766000-m03 ...
	I0505 14:44:55.500316   57413 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:44:55.500341   57413 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:44:55.509317   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59186
	I0505 14:44:55.509693   57413 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:44:55.510049   57413 main.go:141] libmachine: Using API Version  1
	I0505 14:44:55.510063   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:44:55.510277   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:44:55.510388   57413 main.go:141] libmachine: (multinode-766000-m03) Calling .GetState
	I0505 14:44:55.510481   57413 main.go:141] libmachine: (multinode-766000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:44:55.510546   57413 main.go:141] libmachine: (multinode-766000-m03) DBG | hyperkit pid from json: 57201
	I0505 14:44:55.511714   57413 main.go:141] libmachine: (multinode-766000-m03) DBG | hyperkit pid 57201 missing from process table
	I0505 14:44:55.511761   57413 status.go:330] multinode-766000-m03 host status = "Stopped" (err=<nil>)
	I0505 14:44:55.511771   57413 status.go:343] host is not running, skipping remaining checks
	I0505 14:44:55.511778   57413 status.go:257] multinode-766000-m03 status: &{Name:multinode-766000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.87s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (26.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-766000 node start m03 -v=7 --alsologtostderr: (26.31336369s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.68s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (168.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-766000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-766000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-766000: (18.841946141s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr
E0505 14:47:31.461683   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr: (2m29.254818777s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-766000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (168.23s)
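
For reference, the restart flow exercised above can be replayed with the commands the test recorded; the binary path and profile name are the ones used in this run, and the node list printed after the restart is expected to match the one printed before the stop:

	$ out/minikube-darwin-amd64 node list -p multinode-766000
	$ out/minikube-darwin-amd64 stop -p multinode-766000
	$ out/minikube-darwin-amd64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr
	$ out/minikube-darwin-amd64 node list -p multinode-766000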

                                                
                                    
TestMultiNode/serial/DeleteNode (3.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-766000 node delete m03: (3.062823018s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.41s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 stop
E0505 14:48:23.820216   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-766000 stop: (16.624971496s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-766000 status: exit status 7 (89.792543ms)

                                                
                                                
-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-766000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr: exit status 7 (88.195449ms)

                                                
                                                
-- stdout --
	multinode-766000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-766000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0505 14:48:30.614173   57592 out.go:291] Setting OutFile to fd 1 ...
	I0505 14:48:30.614345   57592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:48:30.614351   57592 out.go:304] Setting ErrFile to fd 2...
	I0505 14:48:30.614355   57592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0505 14:48:30.614527   57592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18602-53665/.minikube/bin
	I0505 14:48:30.614712   57592 out.go:298] Setting JSON to false
	I0505 14:48:30.614737   57592 mustload.go:65] Loading cluster: multinode-766000
	I0505 14:48:30.614778   57592 notify.go:220] Checking for updates...
	I0505 14:48:30.615051   57592 config.go:182] Loaded profile config "multinode-766000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0505 14:48:30.615066   57592 status.go:255] checking status of multinode-766000 ...
	I0505 14:48:30.615411   57592 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:48:30.615473   57592 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:48:30.624315   57592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59416
	I0505 14:48:30.624699   57592 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:48:30.625114   57592 main.go:141] libmachine: Using API Version  1
	I0505 14:48:30.625123   57592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:48:30.625336   57592 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:48:30.625454   57592 main.go:141] libmachine: (multinode-766000) Calling .GetState
	I0505 14:48:30.625550   57592 main.go:141] libmachine: (multinode-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:48:30.625624   57592 main.go:141] libmachine: (multinode-766000) DBG | hyperkit pid from json: 57483
	I0505 14:48:30.626571   57592 main.go:141] libmachine: (multinode-766000) DBG | hyperkit pid 57483 missing from process table
	I0505 14:48:30.626579   57592 status.go:330] multinode-766000 host status = "Stopped" (err=<nil>)
	I0505 14:48:30.626586   57592 status.go:343] host is not running, skipping remaining checks
	I0505 14:48:30.626593   57592 status.go:257] multinode-766000 status: &{Name:multinode-766000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0505 14:48:30.626613   57592 status.go:255] checking status of multinode-766000-m02 ...
	I0505 14:48:30.626875   57592 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0505 14:48:30.626899   57592 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0505 14:48:30.635262   57592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59418
	I0505 14:48:30.635596   57592 main.go:141] libmachine: () Calling .GetVersion
	I0505 14:48:30.635936   57592 main.go:141] libmachine: Using API Version  1
	I0505 14:48:30.635951   57592 main.go:141] libmachine: () Calling .SetConfigRaw
	I0505 14:48:30.636170   57592 main.go:141] libmachine: () Calling .GetMachineName
	I0505 14:48:30.636288   57592 main.go:141] libmachine: (multinode-766000-m02) Calling .GetState
	I0505 14:48:30.636377   57592 main.go:141] libmachine: (multinode-766000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0505 14:48:30.636469   57592 main.go:141] libmachine: (multinode-766000-m02) DBG | hyperkit pid from json: 57499
	I0505 14:48:30.637354   57592 main.go:141] libmachine: (multinode-766000-m02) DBG | hyperkit pid 57499 missing from process table
	I0505 14:48:30.637371   57592 status.go:330] multinode-766000-m02 host status = "Stopped" (err=<nil>)
	I0505 14:48:30.637377   57592 status.go:343] host is not running, skipping remaining checks
	I0505 14:48:30.637384   57592 status.go:257] multinode-766000-m02 status: &{Name:multinode-766000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.80s)
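
Note that minikube status reports cluster state through its exit code as well as the table: in the runs captured here it exits 7 whenever at least one host is Stopped, and 2 (further below, under TestNoKubernetes) when the host runs without a kubelet. A hypothetical script gating on cluster state could therefore check the exit status directly, for example:

	$ out/minikube-darwin-amd64 -p multinode-766000 status || echo "cluster not fully running (exit $?)"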

                                                
                                    
TestMultiNode/serial/RestartMultiNode (72.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-766000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m11.959696765s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-766000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (72.31s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-766000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-766000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-766000-m02 --driver=hyperkit : exit status 14 (459.224189ms)

                                                
                                                
-- stdout --
	* [multinode-766000-m02] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-766000-m02' is duplicated with machine name 'multinode-766000-m02' in profile 'multinode-766000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-766000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-766000-m03 --driver=hyperkit : (41.333730899s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-766000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-766000: exit status 80 (317.140664ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-766000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-766000-m03 already exists in multinode-766000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-766000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-766000-m03: (3.48218846s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.66s)
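
The two failures above are distinct checks: start -p multinode-766000-m02 is rejected up front (exit 14, MK_USAGE) because the name collides with a machine inside the multinode-766000 profile, while node add -p multinode-766000 fails later (exit 80, GUEST_NODE_ADD) because the standalone multinode-766000-m03 profile already claims the next machine name. Deleting that profile, as the test does last, removes the second collision; retrying the add afterwards (not part of this test) would then be expected to succeed:

	$ out/minikube-darwin-amd64 delete -p multinode-766000-m03
	$ out/minikube-darwin-amd64 node add -p multinode-766000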

                                                
                                    
TestPreload (138.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-377000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0505 14:51:26.872729   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-377000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m16.754924111s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-377000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-377000 image pull gcr.io/k8s-minikube/busybox: (1.265939148s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-377000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-377000: (8.399384554s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-377000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0505 14:52:31.456896   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-377000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (46.83580447s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-377000 image list
helpers_test.go:175: Cleaning up "test-preload-377000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-377000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-377000: (5.281455866s)
--- PASS: TestPreload (138.70s)
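
The sequence above checks that an image pulled into the cluster is still listed after a stop/start cycle on a cluster created with preload disabled. Condensed from the commands the test recorded (verbose flags trimmed):

	$ out/minikube-darwin-amd64 start -p test-preload-377000 --memory=2200 --wait=true --preload=false --driver=hyperkit --kubernetes-version=v1.24.4
	$ out/minikube-darwin-amd64 -p test-preload-377000 image pull gcr.io/k8s-minikube/busybox
	$ out/minikube-darwin-amd64 stop -p test-preload-377000
	$ out/minikube-darwin-amd64 start -p test-preload-377000 --memory=2200 --wait=true --driver=hyperkit
	$ out/minikube-darwin-amd64 -p test-preload-377000 image list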

                                                
                                    
TestScheduledStopUnix (108.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-970000 --memory=2048 --driver=hyperkit 
E0505 14:53:23.813635   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-970000 --memory=2048 --driver=hyperkit : (36.823291968s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-970000 -n scheduled-stop-970000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-970000 -n scheduled-stop-970000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-970000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-970000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-970000: exit status 7 (79.974147ms)

                                                
                                                
-- stdout --
	scheduled-stop-970000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-970000 -n scheduled-stop-970000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-970000 -n scheduled-stop-970000: exit status 7 (77.37947ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-970000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-970000
--- PASS: TestScheduledStopUnix (108.45s)
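
The scheduled-stop flags exercised above form a small workflow of their own: a stop can be scheduled, rescheduled, cancelled, and finally left to fire. A condensed view of the commands this run used (profile name as used here):

	$ out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --schedule 5m
	$ out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --schedule 15s
	$ out/minikube-darwin-amd64 stop -p scheduled-stop-970000 --cancel-scheduled
	$ out/minikube-darwin-amd64 status -p scheduled-stop-970000

Once a scheduled stop is allowed to fire, the status call exits 7 with the node reported Stopped, as in the output above.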

                                                
                                    
TestSkaffold (229.5s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe526482655 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe526482655 version: (1.481819764s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-360000 --memory=2600 --driver=hyperkit 
E0505 14:57:14.520462   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-360000 --memory=2600 --driver=hyperkit : (2m33.419217526s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe526482655 run --minikube-profile skaffold-360000 --kube-context skaffold-360000 --status-check=true --port-forward=false --interactive=false
E0505 14:57:31.450041   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe526482655 run --minikube-profile skaffold-360000 --kube-context skaffold-360000 --status-check=true --port-forward=false --interactive=false: (56.645409282s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-556d45865f-xdwzg" [e408a9b3-ec74-4baf-9ae2-b9f9d1ba0cc5] Running
E0505 14:58:23.808775   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003266949s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-856dc8dd7-dptwq" [b7e05235-a753-49f6-9785-6124262adbe8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003776469s
helpers_test.go:175: Cleaning up "skaffold-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-360000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-360000: (5.290452569s)
--- PASS: TestSkaffold (229.50s)
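
For reproducing the skaffold check outside the harness, the relevant pairing is the one recorded above: skaffold is pointed at the minikube profile and kube-context explicitly, with status checking on and port forwarding off. Shown here with a plain skaffold binary rather than the temporary download used by the test:

	$ out/minikube-darwin-amd64 start -p skaffold-360000 --memory=2600 --driver=hyperkit
	$ skaffold run --minikube-profile skaffold-360000 --kube-context skaffold-360000 --status-check=true --port-forward=false --interactive=false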

                                                
                                    
TestRunningBinaryUpgrade (80.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3081851904 start -p running-upgrade-501000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.3081851904 start -p running-upgrade-501000 --memory=2200 --vm-driver=hyperkit : (40.055150079s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-501000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-501000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (34.390453536s)
helpers_test.go:175: Cleaning up "running-upgrade-501000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-501000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-501000: (5.368934671s)
--- PASS: TestRunningBinaryUpgrade (80.97s)

                                                
                                    
TestKubernetesUpgrade (134.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (1m3.547965328s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-356000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-356000: (8.395695064s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-356000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-356000 status --format={{.Host}}: exit status 7 (76.302439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (34.028315953s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-356000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (549.455927ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-356000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-356000
	    minikube start -p kubernetes-upgrade-356000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3560002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-356000 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (23.918847007s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-356000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-356000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-356000: (3.601842872s)
--- PASS: TestKubernetesUpgrade (134.17s)
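
The upgrade/downgrade behavior above is the main point of this test: restarting an existing cluster with a newer --kubernetes-version upgrades it in place, while a lower version is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106) and the delete/recreate suggestion shown in the log. Condensed from the commands the test ran (verbose flags trimmed):

	$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit
	$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-356000
	$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.30.0 --driver=hyperkit
	$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-356000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit

The last command is the expected failure; the test then restarts once more at v1.30.0 before cleaning up.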

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (99.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2761796572 start -p stopped-upgrade-228000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2761796572 start -p stopped-upgrade-228000 --memory=2200 --vm-driver=hyperkit : (56.216238755s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2761796572 -p stopped-upgrade-228000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2761796572 -p stopped-upgrade-228000 stop: (8.237262515s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-228000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-228000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (35.28859738s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.74s)
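
This is the stopped-binary upgrade path: a cluster is created and stopped with an old release (a v1.26.0 binary downloaded to a temporary path in this run, written as <minikube-v1.26.0> below), then started by the binary under test (verbose flags trimmed). TestRunningBinaryUpgrade above follows the same pattern without the intermediate stop:

	$ <minikube-v1.26.0> start -p stopped-upgrade-228000 --memory=2200 --vm-driver=hyperkit
	$ <minikube-v1.26.0> -p stopped-upgrade-228000 stop
	$ out/minikube-darwin-amd64 start -p stopped-upgrade-228000 --memory=2200 --driver=hyperkit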

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-228000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-228000: (2.590614278s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.59s)

                                                
                                    
TestPause/serial/Start (210.83s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-645000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-645000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (3m30.827079447s)
--- PASS: TestPause/serial/Start (210.83s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (475.604858ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-848000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18602
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18602-53665/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.48s)
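
As the MK_USAGE failure shows, --no-kubernetes and an explicit --kubernetes-version are mutually exclusive: the version flag has to be dropped (or any persisted default unset, as the error message suggests) before a Kubernetes-free node will start. A minimal corrected invocation, matching the command used by the later sub-tests:

	$ out/minikube-darwin-amd64 config unset kubernetes-version
	$ out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --driver=hyperkit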

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-848000 --driver=hyperkit 
E0505 15:02:31.537416   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-848000 --driver=hyperkit : (42.643411293s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-848000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.82s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --driver=hyperkit : (14.792017261s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-848000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-848000 status -o json: exit status 2 (160.011799ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-848000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-848000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-848000: (2.439542722s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.39s)

                                                
                                    
TestNoKubernetes/serial/Start (21.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --driver=hyperkit 
E0505 15:03:19.433005   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:19.438771   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:19.450291   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:19.472420   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:19.512598   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:19.594603   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:19.755702   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:20.077256   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:20.718810   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:22.000725   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:23.895565   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 15:03:24.561157   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:03:29.682719   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-848000 --no-kubernetes --driver=hyperkit : (21.030230485s)
--- PASS: TestNoKubernetes/serial/Start (21.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-848000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-848000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (143.354382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.54s)

                                                
                                    
TestNoKubernetes/serial/Stop (8.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-848000
E0505 15:03:39.923139   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-848000: (8.39178978s)
--- PASS: TestNoKubernetes/serial/Stop (8.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-848000 --driver=hyperkit 
E0505 15:04:00.403879   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-848000 --driver=hyperkit : (19.32735144s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-848000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-848000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (142.68762ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (4.38s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18602
- KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3759447904/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3759447904/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3759447904/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3759447904/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (4.38s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.15s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18602
- KUBECONFIG=/Users/jenkins/minikube-integration/18602-53665/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2917939696/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2917939696/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2917939696/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2917939696/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (93.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m33.245934568s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.25s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-r5dkj" [3d214943-0a84-485b-a152-ed92938a30d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-r5dkj" [3d214943-0a84-485b-a152-ed92938a30d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004012394s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
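The DNS, Localhost and HairPin subtests above each run a single command inside the netcat deployment; for reference, these are the equivalent manual checks, copied from the invocations in this log against the auto-296000 context:

	# DNS: the pod must resolve the in-cluster service name kubernetes.default
	$ kubectl --context auto-296000 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod must reach its own port 8080 over localhost
	$ kubectl --context auto-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod must reach itself back through the netcat service name (hairpin traffic)
	$ kubectl --context auto-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"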

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (63.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m3.928473714s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.93s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (77.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0505 15:12:31.533991   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m17.369965942s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gq5jn" [2b544d24-4efd-4373-9889-891877a7030b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.002672112s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-td8wc" [20a491b0-61fe-49e0-8586-a1b38ba1c0b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0505 15:13:19.427358   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-td8wc" [20a491b0-61fe-49e0-8586-a1b38ba1c0b0] Running
E0505 15:13:23.891285   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.002546379s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rzm7b" [7c7827f7-d958-4e4d-b70e-bd2623da9c8e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005013772s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nvf4z" [42d29944-15c2-4e9b-b09d-ac515bc922a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nvf4z" [42d29944-15c2-4e9b-b09d-ac515bc922a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003250471s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (63.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m3.337550374s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/Start (54.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (54.323754415s)
--- PASS: TestNetworkPlugins/group/false/Start (54.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v82l7" [258a1b7b-ce75-4dce-87cd-110c4423dc8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-v82l7" [258a1b7b-ce75-4dce-87cd-110c4423dc8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003335378s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dmxkt" [ad079136-7705-4468-a17a-aa8e6cde7aaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-dmxkt" [ad079136-7705-4468-a17a-aa8e6cde7aaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003636157s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (172.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (2m52.937909489s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (172.94s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (62.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0505 15:16:33.777835   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:33.784341   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:33.794508   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:33.815619   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:33.857352   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:33.937904   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:34.098497   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:34.419184   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:35.060245   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:16:36.341061   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m2.439757225s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5mvwd" [112db3e9-ab36-47d3-891d-508cc2fec46d] Running
E0505 15:16:38.901408   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003768976s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-296000 replace --force -f testdata/netcat-deployment.yaml
E0505 15:16:44.021584   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lx7s6" [b3a59ece-905e-4150-813b-084eb2ab9a3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lx7s6" [b3a59ece-905e-4150-813b-084eb2ab9a3c] Running
E0505 15:16:54.261949   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00441151s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (172.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0505 15:17:14.744036   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:17:31.532310   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 15:17:55.705374   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:18:06.670920   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:06.676502   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:06.687335   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:06.707797   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:06.748220   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:06.829795   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:06.990294   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:07.312303   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:07.952610   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (2m52.071571077s)
--- PASS: TestNetworkPlugins/group/bridge/Start (172.07s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cnq4t" [0a01d702-bc55-4f4a-aaeb-6e1cc42d2bb0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0505 15:18:09.234450   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:11.794638   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-cnq4t" [0a01d702-bc55-4f4a-aaeb-6e1cc42d2bb0] Running
E0505 15:18:16.914846   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:19.424210   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004385353s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (60.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0505 15:18:39.345982   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:18:44.466091   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:18:47.638195   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:18:54.707837   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:19:15.189841   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:19:17.625895   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:19:28.598608   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-296000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m0.812809853s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (60.81s)
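Taken together, the Start subtests in this section differ only in the networking flag passed to the same base command. A condensed view of the variants exercised here, with all flags copied from the invocations above and the profile name replaced by a placeholder:

	$ out/minikube-darwin-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit   # auto group: no explicit networking flag
	  --cni=kindnet | --cni=calico | --cni=flannel | --cni=bridge | --cni=false                                                      # kindnet / calico / flannel / bridge / false groups
	  --cni=testdata/kube-flannel.yaml                                                                                               # custom-flannel group: CNI from a manifest file
	  --enable-default-cni=true                                                                                                      # enable-default-cni group
	  --network-plugin=kubenet                                                                                                       # kubenet group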

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-grsd9" [45ed76f0-de47-4060-85c2-90092c78df9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0505 15:19:42.482465   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-grsd9" [45ed76f0-de47-4060-85c2-90092c78df9c] Running
E0505 15:19:46.535554   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:46.541963   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:46.553211   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:46.574221   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:46.614839   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:46.694963   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:46.855342   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:47.175409   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:47.816134   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:19:49.096492   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.005122295s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-296000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-296000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-x9krk" [6ac14653-d3b8-47e9-9cb5-8d77315c4d62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0505 15:20:05.684913   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:05.691153   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:05.702999   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:05.723782   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:05.764935   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:05.846506   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:06.006937   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:06.329369   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:06.970862   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:07.047558   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:20:08.252366   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-x9krk" [6ac14653-d3b8-47e9-9cb5-8d77315c4d62] Running
E0505 15:20:10.816425   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.002565518s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (120.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-872000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-872000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m0.379573135s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (120.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-296000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-296000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0505 15:20:15.942165   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (58.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-055000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:20:46.673883   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:20:50.562334   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:21:08.504387   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:21:18.113805   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:21:27.635264   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-055000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0: (58.753401917s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-055000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [78cdb36f-f33f-4772-b320-6aab02b04c4d] Pending
helpers_test.go:344: "busybox" [78cdb36f-f33f-4772-b320-6aab02b04c4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0505 15:21:33.817628   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [78cdb36f-f33f-4772-b320-6aab02b04c4d] Running
E0505 15:21:37.766455   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:37.772476   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:37.782721   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:37.802965   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:37.844046   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:37.925164   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:38.085745   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:38.406333   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:39.046782   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:40.327733   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003371555s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-055000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-055000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-055000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-055000 --alsologtostderr -v=3
E0505 15:21:42.888195   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:21:48.008555   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-055000 --alsologtostderr -v=3: (8.467931444s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-055000 -n no-preload-055000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-055000 -n no-preload-055000: exit status 7 (77.026942ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-055000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)
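As the helper notes, exit status 7 from "minikube status" is treated as acceptable here ("may be ok"): in this run it simply reports that the host is Stopped, and the addons enable command still succeeds against the stopped profile (presumably the addon takes effect on the next start). A sketch of the same two steps from this subtest:

	$ out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-055000          # prints "Stopped" and exits 7
	$ out/minikube-darwin-amd64 addons enable dashboard -p no-preload-055000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4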

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (299.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-055000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:21:58.249113   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:22:01.509826   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-055000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.30.0: (4m58.800684129s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-055000 -n no-preload-055000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (7.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-872000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [57a439a5-c34f-4c7f-adf7-9fcf9c46d2b2] Pending
helpers_test.go:344: "busybox" [57a439a5-c34f-4c7f-adf7-9fcf9c46d2b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [57a439a5-c34f-4c7f-adf7-9fcf9c46d2b2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.004259654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-872000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.33s)
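The DeployApp step reduces to three kubectl calls against the profile's context. A sketch only: testdata/busybox.yaml is the manifest from the minikube test tree (any pod labelled integration-test=busybox would do), and the kubectl wait call stands in for the polling the test helpers perform:

    # create the busybox test pod
    kubectl --context old-k8s-version-872000 create -f testdata/busybox.yaml
    # wait for the labelled pod to become Ready (the test allows up to 8m)
    kubectl --context old-k8s-version-872000 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # the only in-pod assertion: the open-file limit
    kubectl --context old-k8s-version-872000 exec busybox -- /bin/sh -c "ulimit -n"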

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-872000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-872000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)
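Enabling an addon with overridden image and registry can be checked the same way on a running cluster. A sketch; fake.domain is deliberately unreachable, so the resulting metrics-server pod is not expected to become healthy and the test only inspects the deployment:

    # point the metrics-server addon at a substitute image pulled from a fake registry
    out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-872000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # confirm the deployment exists and carries the overridden image reference
    kubectl --context old-k8s-version-872000 describe deploy/metrics-server -n kube-system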

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-872000 --alsologtostderr -v=3
E0505 15:22:18.730025   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-872000 --alsologtostderr -v=3: (8.41917332s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-872000 -n old-k8s-version-872000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-872000 -n old-k8s-version-872000: exit status 7 (77.061264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-872000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (390.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-872000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0505 15:22:30.424302   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:22:31.572137   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 15:22:49.556502   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:22:59.690615   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:23:06.712071   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:23:09.138151   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.144127   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.154295   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.176445   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.216714   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.297987   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.459946   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:09.780726   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:10.421379   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:11.701497   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:14.262172   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:19.383693   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:19.467228   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:23:23.930184   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 15:23:29.624469   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:23:34.263125   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:23:34.402633   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:23:50.104523   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:24:01.954375   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:24:21.611894   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:24:31.064873   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:24:40.207859   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.213977   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.225385   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.246469   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.287816   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.369317   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.530376   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:40.852365   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:41.493733   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:42.774020   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:45.334148   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:24:46.577209   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:24:46.989346   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
E0505 15:24:50.455050   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:25:00.697204   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:25:05.625454   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:05.630953   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:05.642212   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:05.663223   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:05.701772   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:25:05.703727   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:05.785209   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:05.945877   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:06.267968   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:06.909530   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:08.190196   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:10.751213   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:14.263663   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:25:15.872448   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:21.178683   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:25:26.112860   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:33.396327   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:25:46.594813   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:25:52.985764   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:26:02.139071   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:26:27.554784   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:26:33.815780   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:26:37.763916   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-872000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m30.174382472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-872000 -n old-k8s-version-872000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (390.35s)
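This is the same stop/enable/restart flow as the no-preload group, but against the legacy v1.20.0 profile. A sketch; the --kvm-network and --kvm-qemu-uri flags in the logged command are specific to the kvm2 driver and appear only because the test matrix shares one flag set, so they are omitted here:

    # restart the stopped v1.20.0 profile
    out/minikube-darwin-amd64 start -p old-k8s-version-872000 --memory=2200 --alsologtostderr --wait=true --disable-driver-mounts --keep-context=false --driver=hyperkit --kubernetes-version=v1.20.0
    # verify the host comes back
    out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-872000 -n old-k8s-version-872000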

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-k9mgw" [64533ab2-ef1e-4c02-9b17-645f0ab454c3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-k9mgw" [64533ab2-ef1e-4c02-9b17-645f0ab454c3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004029446s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-k9mgw" [64533ab2-ef1e-4c02-9b17-645f0ab454c3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00409486s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-055000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)
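UserAppExistsAfterStop and AddonExistsAfterStop both reduce to watching the kubernetes-dashboard namespace after the restart. A sketch of the equivalent manual checks:

    # the dashboard pod enabled while the cluster was stopped should be Running after SecondStart
    kubectl --context no-preload-055000 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # the scraper deployment installed by the same addon should also exist
    kubectl --context no-preload-055000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard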

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-055000 image list --format=json
E0505 15:27:05.451116   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
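The image check is a plain listing of what is cached on the node; the busybox image it flags presumably comes from the group's earlier DeployApp step rather than from Kubernetes itself. A sketch:

    # JSON output, as the test consumes it
    out/minikube-darwin-amd64 -p no-preload-055000 image list --format=json
    # table output is easier to eyeball
    out/minikube-darwin-amd64 -p no-preload-055000 image list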

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-055000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-055000 -n no-preload-055000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-055000 -n no-preload-055000: exit status 2 (170.752657ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-055000 -n no-preload-055000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-055000 -n no-preload-055000: exit status 2 (168.71778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-055000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-055000 -n no-preload-055000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-055000 -n no-preload-055000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.02s)
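The Pause check relies on status reporting component state together with a non-zero exit code: while paused, the APIServer field reads Paused and the Kubelet field reads Stopped, both with exit status 2, which the test tolerates. A sketch of the full round trip:

    out/minikube-darwin-amd64 pause -p no-preload-055000 --alsologtostderr -v=1
    # both status calls exit 2 while the control plane is paused; that is expected
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-055000 -n no-preload-055000
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-055000 -n no-preload-055000
    out/minikube-darwin-amd64 unpause -p no-preload-055000 --alsologtostderr -v=1
    # after unpause the same status calls return to exit 0
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-055000 -n no-preload-055000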

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (55.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-795000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:27:24.059393   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:27:31.569921   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 15:27:49.475749   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:28:06.709473   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-795000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0: (55.217603659s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.22s)
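The embed-certs group differs from a stock start only in the --embed-certs flag, which inlines client certificates into the generated kubeconfig instead of pointing at files under .minikube/profiles. A sketch; the kubeconfig inspection is an assumption about how to confirm the behaviour, not something the test runs:

    # start with certificates embedded directly in kubeconfig
    out/minikube-darwin-amd64 start -p embed-certs-795000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit --kubernetes-version=v1.30.0
    # the user entry should carry client-certificate-data inline rather than a client-certificate file path
    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-795000")].user.client-certificate-data}' | head -c 40; echo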

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-795000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [009eb960-7558-46cb-b666-ba8665ff40c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0505 15:28:09.136531   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [009eb960-7558-46cb-b666-ba8665ff40c6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004513536s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-795000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-795000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-795000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (8.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-795000 --alsologtostderr -v=3
E0505 15:28:19.464383   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
E0505 15:28:23.929932   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-795000 --alsologtostderr -v=3: (8.529686641s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-795000 -n embed-certs-795000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-795000 -n embed-certs-795000: exit status 7 (76.408714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-795000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (294.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-795000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:28:34.259285   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:28:36.825149   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-795000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.0: (4m54.169752718s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-795000 -n embed-certs-795000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (294.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7sksb" [21182f38-b770-42b0-85c4-6858c2477dc1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002626234s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7sksb" [21182f38-b770-42b0-85c4-6858c2477dc1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002773741s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-872000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-872000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-872000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-872000 -n old-k8s-version-872000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-872000 -n old-k8s-version-872000: exit status 2 (176.051053ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-872000 -n old-k8s-version-872000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-872000 -n old-k8s-version-872000: exit status 2 (182.91835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-872000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-872000 -n old-k8s-version-872000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-872000 -n old-k8s-version-872000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-482000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:29:40.206251   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
E0505 15:29:46.576028   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:30:05.623442   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:30:05.700081   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
E0505 15:30:07.898325   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-482000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0: (1m3.935314825s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.94s)
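The default-k8s-diff-port group moves the API server off the default 8443. A sketch of the start plus a port check; the cluster-info call is an assumption about how to verify the port, not part of the test:

    out/minikube-darwin-amd64 start -p default-k8s-diff-port-482000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit --kubernetes-version=v1.30.0
    # the control plane URL printed here should end in :8444
    kubectl --context default-k8s-diff-port-482000 cluster-info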

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.20s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-482000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df286cd9-d781-42b1-b891-8fbb80c66e71] Pending
helpers_test.go:344: "busybox" [df286cd9-d781-42b1-b891-8fbb80c66e71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df286cd9-d781-42b1-b891-8fbb80c66e71] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003495287s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-482000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-482000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-482000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-482000 --alsologtostderr -v=3
E0505 15:30:33.316093   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:30:34.639536   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-482000 --alsologtostderr -v=3: (8.434388055s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000: exit status 7 (75.492949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-482000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-482000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:31:32.632947   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:32.638917   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:32.650097   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:32.670256   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:32.711470   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:32.792403   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:32.952749   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:33.274408   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:33.813254   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:31:33.914660   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:35.195451   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:37.755900   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:37.761041   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/flannel-296000/client.crt: no such file or directory
E0505 15:31:42.877006   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:31:53.118269   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:32:10.058588   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.064366   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.074801   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.095836   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.136558   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.217728   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.378948   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:10.699746   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:11.341139   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:12.622571   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:13.599251   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:32:15.184103   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:20.305686   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:30.546437   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:31.567705   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/addons-099000/client.crt: no such file or directory
E0505 15:32:51.027055   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:32:54.560039   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:32:56.867052   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/auto-296000/client.crt: no such file or directory
E0505 15:33:06.707571   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
E0505 15:33:09.134732   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/enable-default-cni-296000/client.crt: no such file or directory
E0505 15:33:19.462236   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/skaffold-360000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-482000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.0: (4m59.443314254s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7m97c" [46aedcad-e79a-42f6-ac97-d5495dc8605a] Running
E0505 15:33:23.927383   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/functional-341000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003615786s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7m97c" [46aedcad-e79a-42f6-ac97-d5495dc8605a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005297166s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-795000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-795000 image list --format=json
E0505 15:33:31.986954   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-795000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-795000 -n embed-certs-795000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-795000 -n embed-certs-795000: exit status 2 (169.163179ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-795000 -n embed-certs-795000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-795000 -n embed-certs-795000: exit status 2 (168.958383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-795000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-795000 -n embed-certs-795000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-795000 -n embed-certs-795000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (52.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-245000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:34:16.480564   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/no-preload-055000/client.crt: no such file or directory
E0505 15:34:29.758333   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kindnet-296000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-245000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0: (52.880634499s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.88s)
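The newest-cni profile starts with pod networking left to a CNI that is never installed, which is why --wait is narrowed to apiserver, system_pods and default_sa, why DeployApp below completes in 0.00s, and why the metrics-server step warns that pods cannot yet schedule. A sketch of the same start:

    # CNI mode with a custom pod CIDR handed to kubeadm and a feature gate enabled;
    # only the apiserver, system pods and default service account are waited on
    out/minikube-darwin-amd64 start -p newest-cni-245000 --memory=2200 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa \
        --feature-gates ServerSideApply=true \
        --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=hyperkit --kubernetes-version=v1.30.0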

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-245000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-245000 --alsologtostderr -v=3
E0505 15:34:40.204088   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/kubenet-296000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-245000 --alsologtostderr -v=3: (8.472912528s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-245000 -n newest-cni-245000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-245000 -n newest-cni-245000: exit status 7 (75.754467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-245000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)
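Note (editor's illustration, not part of the test log): the step above first queries the host state and tolerates exit status 7 ("Stopped", flagged "may be ok") before enabling the dashboard addon on the stopped profile. Below is a minimal sketch of that exit-code handling around a hypothetical helper; it is not the suite's own helper, but it mirrors the behaviour recorded above.

// Minimal sketch (hypothetical helper, not the actual test code):
// "minikube status" exits with code 7 when the host is stopped, which the
// caller treats as acceptable rather than as a failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus returns the {{.Host}} status text; exit status 7 (Stopped)
// is not reported as an error.
func hostStatus(profile string) (string, error) {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		return string(out), nil // "Stopped" is expected right after "minikube stop"
	}
	return string(out), err
}

func main() {
	status, err := hostStatus("newest-cni-245000")
	fmt.Printf("host: %s err: %v\n", status, err)
}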

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (29.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-245000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0
E0505 15:34:46.573984   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/custom-flannel-296000/client.crt: no such file or directory
E0505 15:34:54.017312   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/old-k8s-version-872000/client.crt: no such file or directory
E0505 15:34:57.419708   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/calico-296000/client.crt: no such file or directory
E0505 15:35:05.730529   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/bridge-296000/client.crt: no such file or directory
E0505 15:35:05.807091   54210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18602-53665/.minikube/profiles/false-296000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-245000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.30.0: (29.361426306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-245000 -n newest-cni-245000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-245000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-245000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-245000 -n newest-cni-245000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-245000 -n newest-cni-245000: exit status 2 (172.776119ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-245000 -n newest-cni-245000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-245000 -n newest-cni-245000: exit status 2 (169.972609ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-245000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-245000 -n newest-cni-245000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-245000 -n newest-cni-245000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.12s)
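Note (editor's illustration, not part of the test log): the Pause step follows a fixed cycle visible above: pause the profile, confirm that status --format={{.APIServer}} reports "Paused" and --format={{.Kubelet}} reports "Stopped" (both via exit status 2, flagged "may be ok"), then unpause and re-run both queries. The sketch below reproduces that cycle under the same binary path and profile; it is not the harness's own code.

// Rough sketch of the pause/verify/unpause cycle above (not the test
// harness's code): exit status 2 from "minikube status" is expected while
// the profile is paused.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

const bin = "out/minikube-darwin-amd64"

// run executes the minikube binary and returns trimmed output plus the exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "newest-cni-245000"

	run("pause", "-p", profile, "--alsologtostderr", "-v=1")

	// While paused, both status queries are expected to exit with code 2.
	api, apiCode := run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	kubelet, kubeletCode := run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	fmt.Printf("paused: apiserver=%s (exit %d), kubelet=%s (exit %d)\n", api, apiCode, kubelet, kubeletCode)

	run("unpause", "-p", profile, "--alsologtostderr", "-v=1")

	// After unpause the same queries should exit 0 again.
	api, apiCode = run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	kubelet, kubeletCode = run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile)
	fmt.Printf("unpaused: apiserver=%s (exit %d), kubelet=%s (exit %d)\n", api, apiCode, kubelet, kubeletCode)
}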

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5jmh6" [bad5c558-d71f-4530-a5f9-f4d182fef6df] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5jmh6" [bad5c558-d71f-4530-a5f9-f4d182fef6df] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004289884s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5jmh6" [bad5c558-d71f-4530-a5f9-f4d182fef6df] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005136398s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-482000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-482000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-482000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000: exit status 2 (166.642184ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000: exit status 2 (162.936758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-482000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-482000 -n default-k8s-diff-port-482000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.97s)

                                                
                                    

Test skip (20/336)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-296000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-296000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-296000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-296000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-296000"

                                                
                                                
----------------------- debugLogs end: cilium-296000 [took: 5.798817301s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-296000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-296000
--- SKIP: TestNetworkPlugins/group/cilium (6.18s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-559000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-559000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)

                                                
                                    